Dec 03 20:38:21 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 20:38:21 crc restorecon[4760]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 
20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 
20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 
crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc 
restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 20:38:21 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]:
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:21 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to
system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 20:38:22 crc restorecon[4760]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Dec 03 20:38:22 crc kubenswrapper[4765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 20:38:22 crc kubenswrapper[4765]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 20:38:22 crc kubenswrapper[4765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 20:38:22 crc kubenswrapper[4765]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 20:38:22 crc kubenswrapper[4765]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 20:38:22 crc kubenswrapper[4765]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.196495 4765 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203160 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203196 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203207 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203217 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203228 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203238 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203246 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203254 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203262 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203270 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203278 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203288 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203324 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203332 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203340 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203348 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203356 4765 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203366 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203376 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203385 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203393 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203402 4765 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203410 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203418 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203426 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203442 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203451 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203459 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203495 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203504 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203512 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203520 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203529 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203538 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203546 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203554 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203563 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203571 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203579 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203587 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203594 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203603 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203611 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203621 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203629 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203638 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203645 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203654 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203661 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203670 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203677 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203685 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203693 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203702 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203709 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 20:38:22 crc kubenswrapper[4765]:
W1203 20:38:22.203718 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203726 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203734 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203743 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203750 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203758 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203767 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203774 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203782 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203803 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203813 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203822 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203831 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203839 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203847 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.203855 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204271 4765 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204293 4765 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204342 4765 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204354 4765 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204365 4765 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204374 4765 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204391 4765 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204402 4765 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204411 4765 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 
20:38:22.204421 4765 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204430 4765 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204442 4765 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204452 4765 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204461 4765 flags.go:64] FLAG: --cgroup-root="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204472 4765 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204482 4765 flags.go:64] FLAG: --client-ca-file="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204492 4765 flags.go:64] FLAG: --cloud-config="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204501 4765 flags.go:64] FLAG: --cloud-provider="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204510 4765 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204521 4765 flags.go:64] FLAG: --cluster-domain="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204530 4765 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204540 4765 flags.go:64] FLAG: --config-dir="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204549 4765 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204559 4765 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204570 4765 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204580 4765 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 20:38:22 crc 
kubenswrapper[4765]: I1203 20:38:22.204589 4765 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204599 4765 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204608 4765 flags.go:64] FLAG: --contention-profiling="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204618 4765 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204627 4765 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204637 4765 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204646 4765 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204666 4765 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204675 4765 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204684 4765 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204692 4765 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204701 4765 flags.go:64] FLAG: --enable-server="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204711 4765 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204722 4765 flags.go:64] FLAG: --event-burst="100" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204731 4765 flags.go:64] FLAG: --event-qps="50" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204740 4765 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204749 4765 flags.go:64] FLAG: --event-storage-event-limit="default=0" 
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204758 4765 flags.go:64] FLAG: --eviction-hard="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204769 4765 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204778 4765 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204787 4765 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204797 4765 flags.go:64] FLAG: --eviction-soft="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204807 4765 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204816 4765 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204824 4765 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204833 4765 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204842 4765 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204851 4765 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204860 4765 flags.go:64] FLAG: --feature-gates="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204870 4765 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204879 4765 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204888 4765 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204897 4765 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204907 4765 flags.go:64] FLAG: 
--healthz-port="10248" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204916 4765 flags.go:64] FLAG: --help="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204925 4765 flags.go:64] FLAG: --hostname-override="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204933 4765 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204942 4765 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204951 4765 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204960 4765 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204969 4765 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204978 4765 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204987 4765 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.204995 4765 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205004 4765 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205013 4765 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205023 4765 flags.go:64] FLAG: --kube-api-qps="50" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205031 4765 flags.go:64] FLAG: --kube-reserved="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205040 4765 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205049 4765 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205059 4765 flags.go:64] FLAG: 
--kubelet-cgroups="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205067 4765 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205076 4765 flags.go:64] FLAG: --lock-file="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205085 4765 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205095 4765 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205104 4765 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205118 4765 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205135 4765 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205145 4765 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205154 4765 flags.go:64] FLAG: --logging-format="text" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205163 4765 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205172 4765 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205181 4765 flags.go:64] FLAG: --manifest-url="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205189 4765 flags.go:64] FLAG: --manifest-url-header="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205201 4765 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205210 4765 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205221 4765 flags.go:64] FLAG: --max-pods="110" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205230 4765 flags.go:64] FLAG: 
--maximum-dead-containers="-1" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205239 4765 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205247 4765 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205256 4765 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205265 4765 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205274 4765 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205283 4765 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205335 4765 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205348 4765 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205360 4765 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205371 4765 flags.go:64] FLAG: --pod-cidr="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205382 4765 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205403 4765 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205414 4765 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205423 4765 flags.go:64] FLAG: --pods-per-core="0" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205432 4765 flags.go:64] FLAG: --port="10250" Dec 03 20:38:22 crc 
kubenswrapper[4765]: I1203 20:38:22.205442 4765 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205451 4765 flags.go:64] FLAG: --provider-id="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205460 4765 flags.go:64] FLAG: --qos-reserved="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205469 4765 flags.go:64] FLAG: --read-only-port="10255" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205479 4765 flags.go:64] FLAG: --register-node="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205487 4765 flags.go:64] FLAG: --register-schedulable="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205498 4765 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205514 4765 flags.go:64] FLAG: --registry-burst="10" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205523 4765 flags.go:64] FLAG: --registry-qps="5" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205531 4765 flags.go:64] FLAG: --reserved-cpus="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205541 4765 flags.go:64] FLAG: --reserved-memory="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205552 4765 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205562 4765 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205571 4765 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205579 4765 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205588 4765 flags.go:64] FLAG: --runonce="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205597 4765 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 
20:38:22.205606 4765 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205616 4765 flags.go:64] FLAG: --seccomp-default="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205624 4765 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205633 4765 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205642 4765 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205652 4765 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205662 4765 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205670 4765 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205679 4765 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205688 4765 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205696 4765 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205706 4765 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205715 4765 flags.go:64] FLAG: --system-cgroups="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205724 4765 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205737 4765 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205746 4765 flags.go:64] FLAG: --tls-cert-file="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205755 4765 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 20:38:22 
crc kubenswrapper[4765]: I1203 20:38:22.205766 4765 flags.go:64] FLAG: --tls-min-version="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205775 4765 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205784 4765 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205792 4765 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205802 4765 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205813 4765 flags.go:64] FLAG: --v="2" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205824 4765 flags.go:64] FLAG: --version="false" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205836 4765 flags.go:64] FLAG: --vmodule="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205846 4765 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.205856 4765 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206062 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206073 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206084 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206093 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206102 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206111 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 20:38:22 crc 
kubenswrapper[4765]: W1203 20:38:22.206121 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206128 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206136 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206144 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206158 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206166 4765 feature_gate.go:330] unrecognized feature gate: Example Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206174 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206182 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206190 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206199 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206206 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206215 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206225 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206234 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206243 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206253 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206262 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206270 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206278 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206288 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206338 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206349 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206357 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206365 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206374 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206384 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206395 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206405 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206414 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206423 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206432 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206440 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206450 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206458 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206467 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206475 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206487 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206495 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206503 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206514 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206524 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206532 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206542 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206551 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206559 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206567 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206576 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206584 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206592 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206601 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206609 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206617 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206625 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206632 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206640 4765 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206649 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206657 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206664 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206672 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206680 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206687 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206695 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206703 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206710 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.206718 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.206741 4765 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.218733 4765 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.218772 4765 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218925 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218948 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218958 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218967 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218976 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218985 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.218993 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219001 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219009 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219016 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219024 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219031 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219039 4765 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219047 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219054 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219062 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219069 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219077 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219084 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219094 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219104 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219112 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219121 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219128 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219137 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219145 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219152 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219160 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219168 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219176 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219184 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219193 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219201 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219209 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219219 
4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219227 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219237 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219248 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219257 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219265 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219275 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219282 4765 feature_gate.go:330] unrecognized feature gate: Example Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219290 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219328 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219339 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219353 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219364 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219372 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219380 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219389 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219397 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219405 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219414 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219422 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219430 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219438 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219445 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219456 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219466 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219475 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219485 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219494 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219533 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219541 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219550 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219559 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219567 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219575 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219583 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219590 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219601 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.219614 4765 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false 
EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219833 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219848 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219860 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219868 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219878 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219887 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219896 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219904 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219911 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219921 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219929 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219936 4765 feature_gate.go:330] 
unrecognized feature gate: UpgradeStatus Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219944 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219951 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219959 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219967 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219974 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219984 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.219992 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220000 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220007 4765 feature_gate.go:330] unrecognized feature gate: Example Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220015 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220022 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220030 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220038 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220046 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 20:38:22 crc kubenswrapper[4765]: 
W1203 20:38:22.220053 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220061 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220069 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220077 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220085 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220093 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220100 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220108 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220116 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220125 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220132 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220141 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220150 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220158 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220166 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220175 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220183 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220191 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220199 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220209 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220219 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220229 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220236 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220245 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220255 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220265 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220273 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220281 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220289 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220330 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220342 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220352 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220361 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220369 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220377 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220385 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220392 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220400 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220409 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220417 4765 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220425 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220433 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220441 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220448 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.220458 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.220469 4765 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.220673 4765 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.224681 4765 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.224804 4765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.225619 4765 server.go:997] "Starting client certificate rotation" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.225657 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.225876 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-12 22:36:12.025814942 +0000 UTC Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.226132 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.232890 4765 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.236078 4765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.236845 4765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.246887 4765 log.go:25] "Validated CRI v1 runtime API" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.272917 4765 log.go:25] "Validated CRI v1 image API" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.276053 4765 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.279212 4765 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-20-34-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.279251 4765 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.299678 4765 manager.go:217] Machine: {Timestamp:2025-12-03 20:38:22.298052142 +0000 UTC m=+0.228597333 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:139d9191-2874-499e-a609-baf6bc364e88 BootID:9228a112-591f-4ddf-8f52-901f725e75be Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7d:14:d7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7d:14:d7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7b:bf:4f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7e:04:2f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5e:33:3a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:25:e7:7d Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:33:bc:13 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:55:80:df:74:4a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:55:12:83:8c:b5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.299971 
4765 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.300277 4765 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.301347 4765 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.301861 4765 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.301919 4765 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"mem
ory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.302158 4765 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.302173 4765 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.302578 4765 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.302623 4765 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.302881 4765 state_mem.go:36] "Initialized new in-memory state store" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.302990 4765 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.303553 4765 kubelet.go:418] "Attempting to sync node with API server" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.303573 4765 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.303603 4765 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.303621 4765 kubelet.go:324] "Adding apiserver pod source" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.303635 4765 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.305356 4765 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.305506 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.305567 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.305613 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.305638 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.305716 4765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306433 4765 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306903 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306923 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306931 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306939 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306950 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306956 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306963 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306973 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306981 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306988 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.306997 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.307003 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.307279 4765 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.307786 4765 server.go:1280] "Started kubelet" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.307824 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.308025 4765 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.308043 4765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.308577 4765 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 20:38:22 crc systemd[1]: Started Kubernetes Kubelet. Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.309815 4765 server.go:460] "Adding debug handlers to kubelet server" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.309934 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.309977 4765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.310725 4765 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.310758 4765 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.313458 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:09:40.638251698 +0000 UTC Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.313528 4765 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Waiting 429h31m18.32472724s for next certificate rotation Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.313547 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="200ms" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.314026 4765 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.310513 4765 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.315206 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.315262 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.316374 4765 factory.go:55] Registering systemd factory Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.316453 4765 factory.go:221] Registration of the systemd container factory successfully Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.314744 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.65:6443: connect: connection refused" 
event="&Event{ObjectMeta:{crc.187dcf1152afeafa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 20:38:22.307756794 +0000 UTC m=+0.238301945,LastTimestamp:2025-12-03 20:38:22.307756794 +0000 UTC m=+0.238301945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.317705 4765 factory.go:153] Registering CRI-O factory Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.317730 4765 factory.go:221] Registration of the crio container factory successfully Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.317834 4765 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.317873 4765 factory.go:103] Registering Raw factory Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.317892 4765 manager.go:1196] Started watching for new ooms in manager Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.321174 4765 manager.go:319] Starting recovery of all containers Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.326566 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.326743 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.326802 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.326874 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.326932 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327026 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327100 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327175 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327256 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327351 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327427 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327505 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327576 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327665 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327775 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327888 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.327974 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328055 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328137 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328280 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328406 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328462 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328525 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328581 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328654 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328725 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 
20:38:22.328806 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328889 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.328966 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.329865 4765 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.329963 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330033 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" 
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330140 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330210 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330267 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330357 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330444 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330524 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330586 4765 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330649 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330720 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330796 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330872 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.330954 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331028 4765 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331109 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331187 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331260 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331407 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331507 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331597 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331680 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331797 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331898 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.331971 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332036 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332098 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332167 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332223 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332274 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332350 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332416 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332470 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332529 
4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332585 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332638 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332712 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332779 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332834 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332886 4765 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.332939 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333011 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333101 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333161 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333215 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333268 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333335 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333399 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.333455 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334257 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334345 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334404 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334473 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334529 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334588 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334642 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334700 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334767 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334821 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334881 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.334952 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335040 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335134 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335196 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335250 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335316 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335373 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335428 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335491 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335550 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" 
seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335621 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335678 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335761 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335826 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335909 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.335970 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: 
I1203 20:38:22.337111 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337182 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337213 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337241 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337263 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337282 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337324 4765 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337351 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337382 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337404 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337420 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337438 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337453 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337499 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337519 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337535 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337550 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337565 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337580 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337599 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337614 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337628 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337646 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337660 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337704 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337721 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337734 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337753 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337766 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337784 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337798 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" 
seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337811 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337859 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337872 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337893 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337906 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337920 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337938 4765 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337953 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337972 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.337988 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338005 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338024 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338036 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338048 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338062 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338073 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338064 4765 manager.go:324] Recovery completed Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338091 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338136 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338148 4765 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338165 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338176 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338190 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338202 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338211 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338225 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338239 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338252 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338263 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338274 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338289 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338312 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338326 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338338 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338349 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338361 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338372 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338385 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" 
seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338395 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338405 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338417 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338427 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338439 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338450 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338461 4765 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338473 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338483 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338493 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338507 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338517 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338529 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338539 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338549 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338563 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338572 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338584 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338593 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338616 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338631 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338650 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338665 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338675 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338686 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338698 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338708 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338720 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338730 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338740 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338752 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338762 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338800 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338811 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338820 4765 reconstruct.go:97] "Volume reconstruction finished" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.338827 4765 reconciler.go:26] "Reconciler: start to sync state" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.351937 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.353157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.353183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.353192 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc 
kubenswrapper[4765]: I1203 20:38:22.355093 4765 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.355107 4765 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.355122 4765 state_mem.go:36] "Initialized new in-memory state store" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.356745 4765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.358476 4765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.358514 4765 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.358556 4765 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.358603 4765 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.362224 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.362309 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.362982 4765 policy_none.go:49] "None policy: Start" Dec 03 20:38:22 crc 
kubenswrapper[4765]: I1203 20:38:22.364034 4765 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.364066 4765 state_mem.go:35] "Initializing new in-memory state store" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.415022 4765 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.422118 4765 manager.go:334] "Starting Device Plugin manager" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.422627 4765 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.422653 4765 server.go:79] "Starting device plugin registration server" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.423183 4765 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.423204 4765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.423398 4765 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.423497 4765 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.423511 4765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.431922 4765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.459055 4765 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.459142 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.460663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.460722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.460731 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.460912 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.461205 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.461257 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.461815 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.461985 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462164 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462665 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462810 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.462847 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.463763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.463789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.463821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464081 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464102 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464110 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464231 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464452 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464580 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.464968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465005 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465109 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465253 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465290 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465884 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.465924 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466344 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.466423 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.467276 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.467313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.467321 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.514383 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="400ms" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.523623 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.524929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.524974 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.524986 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.525013 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.526138 4765 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540511 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540555 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540575 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540594 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540679 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540722 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540745 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540764 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540784 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540802 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540861 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540904 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.540958 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.541015 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642662 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642752 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642785 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642827 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642858 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642887 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642915 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.642942 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643002 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643028 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643059 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643094 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643124 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643112 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643223 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643271 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643440 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643446 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643155 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643495 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643539 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643550 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643540 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643589 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643599 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643018 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643622 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.643555 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.726403 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.728361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.728415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.728473 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.728514 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.729217 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.786596 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.807960 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.813690 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.816805 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6a576d836031d6adf4cd4b4743ef2fbbe628132d42493c250ace973042bba89e WatchSource:0}: Error finding container 6a576d836031d6adf4cd4b4743ef2fbbe628132d42493c250ace973042bba89e: Status 404 returned error can't find the container with id 6a576d836031d6adf4cd4b4743ef2fbbe628132d42493c250ace973042bba89e
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.846697 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2f626273d60864a2beccf4802f12fabe09d3a4128f17d0e8440a76260a321c66 WatchSource:0}: Error finding container 2f626273d60864a2beccf4802f12fabe09d3a4128f17d0e8440a76260a321c66: Status 404 returned error can't find the container with id 2f626273d60864a2beccf4802f12fabe09d3a4128f17d0e8440a76260a321c66
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.851021 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c13f09ef9ad9b1fbce785228afe16a25687d4efcf85502eb54878c1c3e2fe8c2 WatchSource:0}: Error finding container c13f09ef9ad9b1fbce785228afe16a25687d4efcf85502eb54878c1c3e2fe8c2: Status 404 returned error can't find the container with id c13f09ef9ad9b1fbce785228afe16a25687d4efcf85502eb54878c1c3e2fe8c2
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.851160 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: I1203 20:38:22.857966 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.876508 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-11967af21d22eea72a755f492fc33c4096564524c7a2aeb2c0fc265d83699d0f WatchSource:0}: Error finding container 11967af21d22eea72a755f492fc33c4096564524c7a2aeb2c0fc265d83699d0f: Status 404 returned error can't find the container with id 11967af21d22eea72a755f492fc33c4096564524c7a2aeb2c0fc265d83699d0f
Dec 03 20:38:22 crc kubenswrapper[4765]: W1203 20:38:22.882694 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d590f095a94520809cec894e0931181b771cadcd0d3e6e5f6872a0fab675c47c WatchSource:0}: Error finding container d590f095a94520809cec894e0931181b771cadcd0d3e6e5f6872a0fab675c47c: Status 404 returned error can't find the container with id d590f095a94520809cec894e0931181b771cadcd0d3e6e5f6872a0fab675c47c
Dec 03 20:38:22 crc kubenswrapper[4765]: E1203 20:38:22.915857 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="800ms"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.130373 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.132462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.132519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.132532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.132567 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.133127 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc"
Dec 03 20:38:23 crc kubenswrapper[4765]: W1203 20:38:23.230375 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.230474 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.308735 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused
Dec 03 20:38:23 crc kubenswrapper[4765]: W1203 20:38:23.346679 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.346878 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError"
Dec 03 20:38:23 crc kubenswrapper[4765]: W1203 20:38:23.358722 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.358836 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.365103 4765 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704" exitCode=0
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.365189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.365346 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2f626273d60864a2beccf4802f12fabe09d3a4128f17d0e8440a76260a321c66"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.365491 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.368826 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.368879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.368898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.372833 4765 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6acbb58cda2fb4e987a0f1693722b4d489cc3154e1fd8fe275192d0bc7e341a0" exitCode=0
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.372890 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6acbb58cda2fb4e987a0f1693722b4d489cc3154e1fd8fe275192d0bc7e341a0"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.372981 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6a576d836031d6adf4cd4b4743ef2fbbe628132d42493c250ace973042bba89e"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.373141 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.374520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.374607 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.374677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.374834 4765 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe" exitCode=0
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.374941 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.375449 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d590f095a94520809cec894e0931181b771cadcd0d3e6e5f6872a0fab675c47c"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.375604 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.376563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.376620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.376639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.377495 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.377618 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11967af21d22eea72a755f492fc33c4096564524c7a2aeb2c0fc265d83699d0f"}
Dec 03 20:38:23 crc kubenswrapper[4765]: W1203 20:38:23.378378 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.65:6443: connect: connection refused
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.378553 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.65:6443: connect: connection refused" logger="UnhandledError"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.379520 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9" exitCode=0
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.379605 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.379683 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c13f09ef9ad9b1fbce785228afe16a25687d4efcf85502eb54878c1c3e2fe8c2"}
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.379868 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.381068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.381096 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.381110 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.382977 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.385743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.385783 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.385798 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.716849 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="1.6s"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.933815 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.936252 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.936397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.936409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:23 crc kubenswrapper[4765]: I1203 20:38:23.936433 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 03 20:38:23 crc kubenswrapper[4765]: E1203 20:38:23.937585 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.65:6443: connect: connection refused" node="crc"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.385595 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.385654 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.385667 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.385795 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.386701 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.386726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.386738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.389843 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.389877 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.389893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.389979 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.391578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.391605 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.391619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.393579 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.393632 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.393650 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.393662 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.394854 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.397127 4765 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1" exitCode=0
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.397192 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.397345 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.398107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.398136 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.398146 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.399998 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"02afd98c1bad1155da02497a9b0809d01dcbcd2fe6049b3b4979d03a77f2d267"}
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.400120 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.401902 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.401932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:24 crc kubenswrapper[4765]: I1203 20:38:24.401941 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.414017 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6"}
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.414122 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.415515 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.415565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.415580 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.417935 4765 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03" exitCode=0
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.418002 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03"}
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.418124 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.419584 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.421999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.422087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.422104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.422551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.422583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.422596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.538622 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.540083 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.540162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.540183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:25 crc kubenswrapper[4765]: I1203 20:38:25.540226 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.314048 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.319811 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424164 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a"}
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf"}
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424270 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424348 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424277 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc"
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb"} Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424418 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49"} Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.424448 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.425351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.425397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.425410 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.426271 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.426365 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.426385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.787097 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.799535 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.799778 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.801376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.801430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:26 crc kubenswrapper[4765]: I1203 20:38:26.801449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.136487 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.433344 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32"} Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.433511 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.433535 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.433577 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.433588 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435637 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435689 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.435944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:27 crc kubenswrapper[4765]: I1203 20:38:27.807470 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.440282 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.440403 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 
20:38:28.440282 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.446514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.446574 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.446594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.447048 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.447101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.447140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.447158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.447101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.447247 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:28 crc kubenswrapper[4765]: I1203 20:38:28.723003 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.442883 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.442904 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.444175 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.444215 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.444227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.444390 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.444432 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:29 crc kubenswrapper[4765]: I1203 20:38:29.444445 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:30 crc kubenswrapper[4765]: I1203 20:38:30.427059 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 20:38:30 crc kubenswrapper[4765]: I1203 20:38:30.427428 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:30 crc kubenswrapper[4765]: I1203 20:38:30.429189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:30 crc kubenswrapper[4765]: I1203 20:38:30.429239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:30 crc kubenswrapper[4765]: I1203 20:38:30.429259 4765 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.373359 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.373645 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.375267 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.375346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.375361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:32 crc kubenswrapper[4765]: E1203 20:38:32.432570 4765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.941002 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.941243 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.942756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.942813 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:32 crc kubenswrapper[4765]: I1203 20:38:32.942834 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 
20:38:34 crc kubenswrapper[4765]: I1203 20:38:34.310523 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 03 20:38:34 crc kubenswrapper[4765]: E1203 20:38:34.397215 4765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 20:38:35 crc kubenswrapper[4765]: W1203 20:38:35.232839 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 20:38:35.232991 4765 trace.go:236] Trace[1076916370]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 20:38:25.230) (total time: 10002ms): Dec 03 20:38:35 crc kubenswrapper[4765]: Trace[1076916370]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (20:38:35.232) Dec 03 20:38:35 crc kubenswrapper[4765]: Trace[1076916370]: [10.002379215s] [10.002379215s] END Dec 03 20:38:35 crc kubenswrapper[4765]: E1203 20:38:35.233032 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 
20:38:35.254315 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 20:38:35.254389 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 20:38:35.264211 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 20:38:35.264320 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 20:38:35.941037 4765 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 20:38:35 crc kubenswrapper[4765]: I1203 20:38:35.941152 4765 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 20:38:36 crc kubenswrapper[4765]: I1203 20:38:36.497499 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 20:38:36 crc kubenswrapper[4765]: I1203 20:38:36.497589 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.145162 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.145409 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.146588 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.146660 4765 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.146873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.146906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.146922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.152768 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.476134 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.476862 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.476958 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.477561 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.477614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.477631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.815546 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.815718 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.817216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.817289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:37 crc kubenswrapper[4765]: I1203 20:38:37.817370 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:38 crc kubenswrapper[4765]: I1203 20:38:38.408086 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 20:38:38 crc kubenswrapper[4765]: I1203 20:38:38.427082 4765 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 03 20:38:39 crc kubenswrapper[4765]: I1203 20:38:39.277106 4765 csr.go:261] certificate signing request csr-xw8px is approved, waiting to be issued Dec 03 20:38:39 crc kubenswrapper[4765]: I1203 20:38:39.284387 4765 csr.go:257] certificate signing request csr-xw8px is issued Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 
20:38:40.253954 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.255656 4765 trace.go:236] Trace[1968148479]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 20:38:25.676) (total time: 14579ms): Dec 03 20:38:40 crc kubenswrapper[4765]: Trace[1968148479]: ---"Objects listed" error: 14579ms (20:38:40.255) Dec 03 20:38:40 crc kubenswrapper[4765]: Trace[1968148479]: [14.579287514s] [14.579287514s] END Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.255714 4765 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.255763 4765 trace.go:236] Trace[1284779109]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 20:38:26.066) (total time: 14189ms): Dec 03 20:38:40 crc kubenswrapper[4765]: Trace[1284779109]: ---"Objects listed" error: 14189ms (20:38:40.255) Dec 03 20:38:40 crc kubenswrapper[4765]: Trace[1284779109]: [14.18912956s] [14.18912956s] END Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.255781 4765 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.255789 4765 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.257367 4765 trace.go:236] Trace[169008421]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 20:38:26.237) (total time: 14020ms): Dec 03 20:38:40 crc kubenswrapper[4765]: Trace[169008421]: ---"Objects listed" error: 14019ms (20:38:40.257) Dec 03 20:38:40 crc kubenswrapper[4765]: Trace[169008421]: [14.020192661s] [14.020192661s] END 
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.257412 4765 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.259931 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.285427 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-03 20:33:39 +0000 UTC, rotation deadline is 2026-10-21 20:23:39.724658536 +0000 UTC Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.285732 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7727h44m59.438932672s for next certificate rotation Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.316025 4765 apiserver.go:52] "Watching apiserver" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.318940 4765 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.319259 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.319812 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.320038 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.320511 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.320639 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.320761 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.320923 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.320961 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.320984 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.320808 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.324041 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.324086 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.324049 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.324257 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.324752 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.325182 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.325267 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.325426 4765 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.325522 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.358835 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.373014 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.388529 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.400235 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.413100 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.414963 4765 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.426745 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.437705 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457135 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457188 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457215 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457236 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457263 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457292 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457349 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 20:38:40 crc kubenswrapper[4765]: 
I1203 20:38:40.457382 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457412 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457445 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457475 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457507 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457532 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457537 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457603 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457627 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457651 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457641 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457675 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457702 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457724 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457747 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 
20:38:40.457786 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457807 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457833 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457856 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457863 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457880 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457945 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457966 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.457988 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458011 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458035 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458062 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458112 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458174 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458225 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458250 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " 
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458271 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458316 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458373 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458401 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458427 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458451 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458503 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458554 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458575 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458599 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458622 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 20:38:40 crc kubenswrapper[4765]: 
I1203 20:38:40.458649 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458673 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458726 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458749 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458770 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458791 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458813 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458836 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459973 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460016 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460048 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc 
kubenswrapper[4765]: I1203 20:38:40.460072 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460098 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460124 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460147 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460169 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460191 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460214 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460237 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460258 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460280 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460861 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460899 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460923 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460980 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461004 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461028 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461053 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 
20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461078 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461097 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461120 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461142 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461164 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461185 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461207 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461228 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461249 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461270 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461292 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") 
" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461332 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461375 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461397 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462793 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462845 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462863 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462879 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462894 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462910 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462926 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462942 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462960 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462996 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463012 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463072 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463088 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463104 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463139 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463242 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463259 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463274 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 20:38:40 crc 
kubenswrapper[4765]: I1203 20:38:40.463308 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463332 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463410 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463455 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464231 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464261 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464285 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464324 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464357 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464373 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464390 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 
20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464428 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464444 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464847 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464890 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464908 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464923 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464970 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458291 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458360 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458457 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458543 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458611 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458617 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458635 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465450 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458662 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465531 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458722 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458780 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458912 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458888 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459024 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.458974 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459153 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459264 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459345 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459376 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459405 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459418 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459513 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459622 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459662 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459741 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459809 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459836 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459861 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.459982 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460020 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460056 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460068 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460267 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460509 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460514 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460640 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460661 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460664 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.460946 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461122 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461155 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461473 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461569 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461861 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.461835 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462227 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462729 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.462747 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463382 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463787 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.463969 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464613 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464710 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464975 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464987 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465571 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465745 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465843 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465946 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467163 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467168 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465981 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.466609 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.466674 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.466777 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467050 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467071 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467089 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467103 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.465970 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.464988 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467589 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467579 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467605 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467635 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467641 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467670 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467740 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467873 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467978 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.467947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468122 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468176 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468176 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468339 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468229 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468379 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468414 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468503 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468534 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468561 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468587 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468611 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468634 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468660 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468685 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468711 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468735 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468759 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468781 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468803 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468833 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468861 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468883 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468905 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468921 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468936 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468951 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468968 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468997 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469020 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 
20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469042 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469065 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469088 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469110 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469277 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469324 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469375 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469397 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469419 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469441 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469462 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469484 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469506 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469527 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469549 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469571 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469596 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469626 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469650 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469672 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469694 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469715 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 20:38:40 crc 
kubenswrapper[4765]: I1203 20:38:40.469738 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469762 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469786 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468545 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468704 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468630 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468727 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468847 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.468939 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469295 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469424 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469527 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.469611 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.471486 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.471990 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.475812 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.471983 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.472115 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.472409 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.472625 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.473281 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.473325 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.473448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.473459 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.473474 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.473651 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.473668 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:38:40.973588233 +0000 UTC m=+18.904133394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.474118 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.474208 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.474749 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.474898 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.476142 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.476254 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.476414 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.477696 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.477753 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.477838 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.477934 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.478045 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.478670 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.478731 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.477597 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.478845 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.478916 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.479106 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.477292 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.479683 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.479758 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.479890 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.480088 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.480236 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.480249 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.480265 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.480801 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.480921 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481060 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481081 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481118 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481105 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481157 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481196 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481223 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481239 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481274 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481287 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481322 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481341 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481363 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481391 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481496 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481516 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481550 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481658 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481673 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481726 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481733 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481749 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481773 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481778 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481860 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481876 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481924 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.482074 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.482199 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.482718 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.482753 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.482863 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.483032 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483332 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483439 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.483475 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:40.98343548 +0000 UTC m=+18.913980651 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483515 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483571 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.481865 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483636 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483666 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483741 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.483233 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.484042 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.484402 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.484869 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.485005 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:40.984980693 +0000 UTC m=+18.915525854 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485088 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485255 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485513 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485611 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485944 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.486552 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485698 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.485902 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.486862 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487009 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487035 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487092 4765 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487114 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487181 4765 reconciler_common.go:293] "Volume detached for volume
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487208 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487225 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487262 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487290 4765 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487371 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487393 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487447 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487469 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487497 4765 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487515 4765 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487816 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487877 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487955 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.487973 4765 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488026 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488050 4765 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488076 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488117 4765 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488132 4765 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488149 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488162 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName:
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488219 4765 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488335 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.488595 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.489692 4765 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.489771 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492194 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492254 4765 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492273 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492288 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492314 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492328 4765 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492340 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492352 4765 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492364 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492377 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492392 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492405 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492417 4765 reconciler_common.go:293] "Volume detached for volume
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492429 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492441 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492452 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492463 4765 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492474 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492485 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492497 4765 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492508 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492519 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492531 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492542 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492553 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492565 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492577 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492586 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492598 4765 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492613 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492626 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492639 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492652 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492664 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on
node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492676 4765 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492688 4765 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492699 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492711 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492722 4765 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492734 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492745 4765 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492758 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492770 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492782 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492794 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492806 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492817 4765 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492831 4765 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492845 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492861 4765 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492874 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492886 4765 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492897 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492909 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492921 4765 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492935 4765 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath
\"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492949 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492961 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492974 4765 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492987 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.492999 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493011 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493022 4765 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493035 4765 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493047 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493059 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493251 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493270 4765 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493281 4765 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493309 4765 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493323 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493336 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493349 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493361 4765 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493374 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493386 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493398 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493411 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493423 4765 reconciler_common.go:293] "Volume
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493435 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493450 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493463 4765 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493476 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493495 4765 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493506 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493518 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" 
(UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493531 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493543 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493554 4765 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493566 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493577 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493593 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493606 4765 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc 
kubenswrapper[4765]: I1203 20:38:40.493618 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493629 4765 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493641 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493651 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493663 4765 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493676 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493689 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493702 4765 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493715 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493726 4765 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493738 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493750 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493763 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493774 4765 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493785 4765 reconciler_common.go:293] "Volume detached for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493797 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493808 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493821 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493833 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493847 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493859 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493871 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493882 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493894 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493905 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493918 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493930 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493942 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493954 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 
crc kubenswrapper[4765]: I1203 20:38:40.493965 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493980 4765 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.493992 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.494004 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.494016 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.494028 4765 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.495896 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.495913 4765 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.495926 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.495980 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:40.995963661 +0000 UTC m=+18.926508812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501132 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501397 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501571 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501590 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501739 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.501945 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.502834 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.502861 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.502905 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.502951 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.502898 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.502968 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:41.002945987 +0000 UTC m=+18.933491258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.503816 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.504688 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.504756 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.504761 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.505851 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.506242 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.507198 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.510424 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.513652 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.513926 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.514005 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.514229 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.514237 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.515463 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.516605 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.516639 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.516736 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.517056 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.517394 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.517485 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.517735 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.518028 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.518267 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.518313 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.519353 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: 
"09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.521597 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.531262 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.534666 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.540136 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.545152 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595319 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595366 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595409 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595423 4765 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595437 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595494 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595547 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595633 4765 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595665 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595676 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595686 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595698 4765 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595707 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595710 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595715 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595747 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595756 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595764 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595775 4765 
reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595783 4765 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595791 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595799 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595807 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595815 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595825 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595834 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595842 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595851 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595859 4765 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595868 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595876 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595885 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595893 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" 
DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595902 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595910 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595920 4765 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595928 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595936 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595946 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.595955 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: 
I1203 20:38:40.595963 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.640515 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.656225 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.665576 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 20:38:40 crc kubenswrapper[4765]: W1203 20:38:40.668823 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-1bde0ffd6f5fdd510a7725cc8a450cfee19fd6a33174b3d32819eae8774994a7 WatchSource:0}: Error finding container 1bde0ffd6f5fdd510a7725cc8a450cfee19fd6a33174b3d32819eae8774994a7: Status 404 returned error can't find the container with id 1bde0ffd6f5fdd510a7725cc8a450cfee19fd6a33174b3d32819eae8774994a7 Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.891131 4765 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.998740 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.998929 4765 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:38:41.998874848 +0000 UTC m=+19.929419999 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.999005 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.999071 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:40 crc kubenswrapper[4765]: I1203 20:38:40.999104 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999245 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999259 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999286 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999242 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999320 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999328 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:41.999320851 +0000 UTC m=+19.929866002 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999492 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:41.999463595 +0000 UTC m=+19.930008746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:40 crc kubenswrapper[4765]: E1203 20:38:40.999509 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:41.999501396 +0000 UTC m=+19.930046647 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.100326 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:41 crc kubenswrapper[4765]: E1203 20:38:41.100478 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:41 crc kubenswrapper[4765]: E1203 20:38:41.100493 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:41 crc kubenswrapper[4765]: E1203 20:38:41.100504 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:41 crc kubenswrapper[4765]: E1203 20:38:41.100555 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-03 20:38:42.10054072 +0000 UTC m=+20.031085871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.493390 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b"} Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.493445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"709949623a08747ba902d48ba97d881b13f839397351fec268d5115faebc27b3"} Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.495655 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.497811 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6" exitCode=255 Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.497859 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6"} Dec 
03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.499926 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27"} Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.504474 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1"} Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.504530 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"deda05df1b208898c2b1d7afc4845c8d095b62d66b241ede8991dcf8b5a0440d"} Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.504546 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1bde0ffd6f5fdd510a7725cc8a450cfee19fd6a33174b3d32819eae8774994a7"} Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.510557 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.525828 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.538194 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.545026 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 
20:38:41.545153 4765 scope.go:117] "RemoveContainer" containerID="2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.559862 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.582119 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.585611 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-b2gnt"] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.586028 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.590150 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.590552 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.590702 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.599859 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.616443 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.627017 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.639240 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.650978 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.662403 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.678247 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.697027 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.704244 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2ecd2b1c-dbec-4009-b852-74060586afa0-hosts-file\") pod \"node-resolver-b2gnt\" (UID: \"2ecd2b1c-dbec-4009-b852-74060586afa0\") " pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.704291 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2657l\" (UniqueName: \"kubernetes.io/projected/2ecd2b1c-dbec-4009-b852-74060586afa0-kube-api-access-2657l\") pod \"node-resolver-b2gnt\" (UID: \"2ecd2b1c-dbec-4009-b852-74060586afa0\") " pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.717917 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:41Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.805783 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2ecd2b1c-dbec-4009-b852-74060586afa0-hosts-file\") pod \"node-resolver-b2gnt\" (UID: \"2ecd2b1c-dbec-4009-b852-74060586afa0\") " pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.805839 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2657l\" (UniqueName: 
\"kubernetes.io/projected/2ecd2b1c-dbec-4009-b852-74060586afa0-kube-api-access-2657l\") pod \"node-resolver-b2gnt\" (UID: \"2ecd2b1c-dbec-4009-b852-74060586afa0\") " pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.805952 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2ecd2b1c-dbec-4009-b852-74060586afa0-hosts-file\") pod \"node-resolver-b2gnt\" (UID: \"2ecd2b1c-dbec-4009-b852-74060586afa0\") " pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.830307 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2657l\" (UniqueName: \"kubernetes.io/projected/2ecd2b1c-dbec-4009-b852-74060586afa0-kube-api-access-2657l\") pod \"node-resolver-b2gnt\" (UID: \"2ecd2b1c-dbec-4009-b852-74060586afa0\") " pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.898995 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-b2gnt" Dec 03 20:38:41 crc kubenswrapper[4765]: W1203 20:38:41.908626 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ecd2b1c_dbec_4009_b852_74060586afa0.slice/crio-27be1ebaba216531b306bf8afa46c0c1861748c736744b6e072a8ecc8f8e98c1 WatchSource:0}: Error finding container 27be1ebaba216531b306bf8afa46c0c1861748c736744b6e072a8ecc8f8e98c1: Status 404 returned error can't find the container with id 27be1ebaba216531b306bf8afa46c0c1861748c736744b6e072a8ecc8f8e98c1 Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.990214 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-swqqp"] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.990691 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-952wr"] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.990825 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.991708 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.994231 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.994481 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-p9xkg"] Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.994546 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.994700 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.994964 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.995030 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-p9xkg" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.996215 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.996312 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.996249 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.996897 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.997411 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 20:38:41 crc kubenswrapper[4765]: I1203 20:38:41.997670 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.004186 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.006218 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.006805 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.006890 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.006946 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.006975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007125 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007175 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007191 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007242 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:44.007224763 +0000 UTC m=+21.937769914 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007634 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:38:44.007623024 +0000 UTC m=+21.938168175 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007703 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007738 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:44.007729417 +0000 UTC m=+21.938274568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007734 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.007799 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 20:38:44.007782548 +0000 UTC m=+21.938327699 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.024717 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.069626 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.101653 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107390 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-conf-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107430 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-kubelet\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107447 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-k8s-cni-cncf-io\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " 
pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107464 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-netns\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107481 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107497 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-cni-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107521 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-daemon-config\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107541 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-os-release\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " 
pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj426\" (UniqueName: \"kubernetes.io/projected/2d91ef96-b0c9-43eb-8d49-e522199942c9-kube-api-access-lj426\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107576 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-cni-multus\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107595 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-hostroot\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107612 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-rootfs\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107629 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cnibin\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " 
pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107645 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-os-release\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107660 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-cni-bin\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107675 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-multus-certs\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107714 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107741 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107759 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86ml\" (UniqueName: \"kubernetes.io/projected/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-kube-api-access-d86ml\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107790 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107808 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-etc-kubernetes\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107824 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-system-cni-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107841 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-socket-dir-parent\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107854 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-system-cni-dir\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107874 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107889 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d91ef96-b0c9-43eb-8d49-e522199942c9-cni-binary-copy\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107905 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-proxy-tls\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107922 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvn24\" (UniqueName: \"kubernetes.io/projected/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-kube-api-access-cvn24\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.107940 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-cnibin\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.108013 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.108039 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.108049 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.108097 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:44.108082842 +0000 UTC m=+22.038627993 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.119859 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.134405 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.157891 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.171749 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.187992 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.204028 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208578 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d91ef96-b0c9-43eb-8d49-e522199942c9-cni-binary-copy\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208632 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-cnibin\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208686 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-proxy-tls\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208713 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvn24\" (UniqueName: \"kubernetes.io/projected/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-kube-api-access-cvn24\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208737 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-kubelet\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208758 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-conf-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208782 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-k8s-cni-cncf-io\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208809 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-netns\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208829 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208850 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-cni-dir\") pod \"multus-p9xkg\" 
(UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-cnibin\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208894 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-netns\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208872 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-daemon-config\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208867 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-conf-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.208855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-kubelet\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209046 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-os-release\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-k8s-cni-cncf-io\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209131 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj426\" (UniqueName: \"kubernetes.io/projected/2d91ef96-b0c9-43eb-8d49-e522199942c9-kube-api-access-lj426\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209167 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-cni-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209166 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-rootfs\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209198 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-rootfs\") 
pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209249 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-cni-multus\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209277 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-hostroot\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209328 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-hostroot\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209342 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d91ef96-b0c9-43eb-8d49-e522199942c9-cni-binary-copy\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209370 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-cni-multus\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc 
kubenswrapper[4765]: I1203 20:38:42.209411 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cnibin\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209442 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-os-release\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209453 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-os-release\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209473 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cnibin\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209522 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-os-release\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209547 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-cni-bin\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209554 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209567 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-daemon-config\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209579 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-multus-certs\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209597 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-var-lib-cni-bin\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209665 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-host-run-multus-certs\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209706 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209738 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209785 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209807 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86ml\" (UniqueName: \"kubernetes.io/projected/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-kube-api-access-d86ml\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209831 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-etc-kubernetes\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209857 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-system-cni-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-socket-dir-parent\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209902 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-system-cni-dir\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.209959 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-system-cni-dir\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.210112 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-etc-kubernetes\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.210454 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-system-cni-dir\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.210493 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.210498 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d91ef96-b0c9-43eb-8d49-e522199942c9-multus-socket-dir-parent\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.210679 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.210710 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.211946 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-proxy-tls\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.223160 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.224766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86ml\" (UniqueName: \"kubernetes.io/projected/3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9-kube-api-access-d86ml\") pod \"multus-additional-cni-plugins-952wr\" (UID: \"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\") " pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.226400 4765 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.226615 4765 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.226687 4765 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.226800 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cvn24 for pod openshift-machine-config-operator/machine-config-daemon-swqqp: failed to fetch token: Post 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": read tcp 38.102.83.65:45668->38.102.83.65:6443: use of closed network connection Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.226857 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-kube-api-access-cvn24 podName:f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:42.726840463 +0000 UTC m=+20.657385604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cvn24" (UniqueName: "kubernetes.io/projected/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-kube-api-access-cvn24") pod "machine-config-daemon-swqqp" (UID: "f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": read tcp 38.102.83.65:45668->38.102.83.65:6443: use of closed network connection Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.226904 4765 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227104 4765 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227145 4765 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted 
less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227187 4765 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227215 4765 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227361 4765 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227371 4765 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227424 4765 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227384 4765 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": 
Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227443 4765 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.227501 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj426\" (UniqueName: \"kubernetes.io/projected/2d91ef96-b0c9-43eb-8d49-e522199942c9-kube-api-access-lj426\") pod \"multus-p9xkg\" (UID: \"2d91ef96-b0c9-43eb-8d49-e522199942c9\") " pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227688 4765 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227767 4765 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.227897 4765 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.326061 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-952wr" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.335541 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p9xkg" Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.340369 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fea2b65_0ca8_4eb1_ab58_0e990be1d0a9.slice/crio-5a6047834267ecb986eb8a1b022b1b5a63c8f9c4daecea1d9803a5f6413fa8da WatchSource:0}: Error finding container 5a6047834267ecb986eb8a1b022b1b5a63c8f9c4daecea1d9803a5f6413fa8da: Status 404 returned error can't find the container with id 5a6047834267ecb986eb8a1b022b1b5a63c8f9c4daecea1d9803a5f6413fa8da Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.354927 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d91ef96_b0c9_43eb_8d49_e522199942c9.slice/crio-f0782633d6afc7fb317be0c4b74fc7b6c0c6847a1ad0a1d5f79023034a3c4527 WatchSource:0}: Error finding container f0782633d6afc7fb317be0c4b74fc7b6c0c6847a1ad0a1d5f79023034a3c4527: Status 404 returned error can't find the container with id f0782633d6afc7fb317be0c4b74fc7b6c0c6847a1ad0a1d5f79023034a3c4527 Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.358876 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.358887 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.358989 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.359046 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.359121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:42 crc kubenswrapper[4765]: E1203 20:38:42.359394 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.367329 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.368286 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.369807 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.370701 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.371999 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.372710 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.373443 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.374708 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.375496 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.376888 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.380434 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.381377 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.383550 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.385099 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.387458 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.388100 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.388902 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.390018 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.390950 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.392963 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.394875 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.395941 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.396546 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.398049 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.399147 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.403246 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.404440 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.405627 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.406430 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.408066 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.408606 4765 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.408702 4765 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.410965 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.411586 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.412062 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.414035 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.415251 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.415935 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.417005 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.417690 4765 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.418625 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.419225 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.420504 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.421491 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.421987 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.422562 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.423514 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.424635 4765 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.425142 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.425644 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.426549 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.427082 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.428955 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.429573 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.430264 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.430326 4765 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-9dzdh"] Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.431202 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.437835 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.440439 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.441707 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.442004 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.442251 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.442467 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.444166 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.447037 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.455417 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.506768 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b2gnt" 
event={"ID":"2ecd2b1c-dbec-4009-b852-74060586afa0","Type":"ContainerStarted","Data":"94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb"} Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.506814 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b2gnt" event={"ID":"2ecd2b1c-dbec-4009-b852-74060586afa0","Type":"ContainerStarted","Data":"27be1ebaba216531b306bf8afa46c0c1861748c736744b6e072a8ecc8f8e98c1"} Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.508067 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerStarted","Data":"f0782633d6afc7fb317be0c4b74fc7b6c0c6847a1ad0a1d5f79023034a3c4527"} Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.508659 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerStarted","Data":"5a6047834267ecb986eb8a1b022b1b5a63c8f9c4daecea1d9803a5f6413fa8da"} Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513189 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-systemd-units\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513217 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-ovn-kubernetes\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513245 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-node-log\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513262 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-ovn\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513279 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-env-overrides\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513310 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-slash\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513337 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513353 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4c4\" (UniqueName: \"kubernetes.io/projected/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-kube-api-access-lf4c4\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513389 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-bin\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513407 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-script-lib\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513430 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-kubelet\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513447 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-netd\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513463 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovn-node-metrics-cert\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513478 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-log-socket\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513517 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-var-lib-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513533 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-config\") pod \"ovnkube-node-9dzdh\" (UID: 
\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513551 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-netns\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513576 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-systemd\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.513593 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-etc-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.514932 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.517001 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432"} Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.517273 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615038 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-slash\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615099 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615118 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf4c4\" (UniqueName: \"kubernetes.io/projected/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-kube-api-access-lf4c4\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615143 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615161 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-bin\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615178 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-script-lib\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615201 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-kubelet\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615217 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-netd\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615231 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovn-node-metrics-cert\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615266 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-log-socket\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc 
kubenswrapper[4765]: I1203 20:38:42.615312 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-var-lib-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615328 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-config\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615342 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-netns\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615357 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-systemd\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615372 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-etc-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.615385 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-systemd-units\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.616501 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-netd\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.616592 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-slash\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.616766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-systemd\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.616626 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-kubelet\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.616868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617377 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-bin\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617540 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617695 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-etc-openvswitch\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617780 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-log-socket\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617649 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-var-lib-openvswitch\") pod 
\"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-script-lib\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.617939 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-netns\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.618027 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-config\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.618550 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-systemd-units\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621025 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-ovn-kubernetes\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621023 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-ovn-kubernetes\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621107 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-node-log\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621146 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-ovn\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621204 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-env-overrides\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621879 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-ovn\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.621915 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-node-log\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.622360 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovn-node-metrics-cert\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.622734 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-env-overrides\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.640196 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf4c4\" (UniqueName: \"kubernetes.io/projected/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-kube-api-access-lf4c4\") pod \"ovnkube-node-9dzdh\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.770568 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.788391 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad2eb102_7abd_48ad_8287_ab7d2d8a4166.slice/crio-5a8a417a4bf296c5b255e2abd0147fa22b7c9a3c8bd9570a92ca7d94204c7c23 WatchSource:0}: Error finding container 5a8a417a4bf296c5b255e2abd0147fa22b7c9a3c8bd9570a92ca7d94204c7c23: Status 404 returned error can't find the container with id 5a8a417a4bf296c5b255e2abd0147fa22b7c9a3c8bd9570a92ca7d94204c7c23 Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.825637 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvn24\" (UniqueName: \"kubernetes.io/projected/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-kube-api-access-cvn24\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.844981 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvn24\" (UniqueName: \"kubernetes.io/projected/f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5-kube-api-access-cvn24\") pod \"machine-config-daemon-swqqp\" (UID: \"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\") " pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.908802 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.945679 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.949784 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:38:42 crc kubenswrapper[4765]: I1203 20:38:42.955338 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 20:38:42 crc kubenswrapper[4765]: W1203 20:38:42.976933 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e50fba_2d3f_420f_b6fb_cc6a7c8d9eb5.slice/crio-075ca16ac849dd71052b2be734e6aa525629c7e8a2b74788004e7de3350e8d13 WatchSource:0}: Error finding container 075ca16ac849dd71052b2be734e6aa525629c7e8a2b74788004e7de3350e8d13: Status 404 returned error can't find the container with id 075ca16ac849dd71052b2be734e6aa525629c7e8a2b74788004e7de3350e8d13 Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.046822 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.193437 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.243051 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.255547 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.259443 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.274293 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.286452 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.295987 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.299411 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.307405 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.307417 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.312415 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.323141 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.336955 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.346537 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.358236 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.368457 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.375124 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.386014 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.399670 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.415061 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.428172 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.449400 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.460563 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.462829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.462883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.462899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.463026 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.470414 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.470898 4765 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.471326 4765 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.472885 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.472922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.472936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.472954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.472966 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.482951 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: E1203 20:38:43.489519 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.493641 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.493676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.493688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.493705 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.493716 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.494404 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: E1203 20:38:43.505448 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.507052 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.508615 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 
20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.508649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.508658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.508672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.508680 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.519768 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerStarted","Data":"20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.520407 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: E1203 20:38:43.520660 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.522247 4765 generic.go:334] "Generic (PLEG): container finished" podID="3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9" containerID="84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67" exitCode=0 Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.522359 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerDied","Data":"84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.523165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.523193 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.523204 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.523218 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 
20:38:43.523228 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.525079 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" exitCode=0 Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.525131 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.525149 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"5a8a417a4bf296c5b255e2abd0147fa22b7c9a3c8bd9570a92ca7d94204c7c23"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.530729 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.533703 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" 
event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.533743 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.533755 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"075ca16ac849dd71052b2be734e6aa525629c7e8a2b74788004e7de3350e8d13"} Dec 03 20:38:43 crc kubenswrapper[4765]: E1203 20:38:43.536736 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.536978 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.540440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.540472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.540481 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.540495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.540504 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.545908 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.551347 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: E1203 20:38:43.552089 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: E1203 20:38:43.552218 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.554478 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.554508 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.554520 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.554539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.554550 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.566143 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.580283 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.593342 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.602859 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.614144 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.619865 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.631386 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.634241 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.644137 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.657642 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.657686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.657698 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc 
kubenswrapper[4765]: I1203 20:38:43.657717 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.657730 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.658525 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.669328 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.678050 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.679409 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.690647 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.694449 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.700411 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.704211 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.713690 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.732277 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.744566 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.744876 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.756572 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.761379 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.761413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.761425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.761442 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.761455 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.771050 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.771414 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.786547 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.802956 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 
20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.817287 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.838561 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.852123 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.865401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.865403 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.865467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.865487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.865512 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.865530 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.884018 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.898502 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.915050 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.926759 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.941792 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.959153 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:43Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.969864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:43 crc 
kubenswrapper[4765]: I1203 20:38:43.969896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.969906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.969919 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:43 crc kubenswrapper[4765]: I1203 20:38:43.969928 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:43Z","lastTransitionTime":"2025-12-03T20:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.036398 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.036505 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.036538 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.036558 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.036633 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.036680 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:48.036666909 +0000 UTC m=+25.967212060 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.036967 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 20:38:48.036959907 +0000 UTC m=+25.967505058 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.037032 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.037053 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:48.03704788 +0000 UTC m=+25.967593031 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.037103 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.037114 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.037124 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.037143 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:48.037137972 +0000 UTC m=+25.967683113 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.073056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.073096 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.073107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.073124 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.073134 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.137713 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.137882 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.137902 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.137915 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.137971 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:48.1379536 +0000 UTC m=+26.068498751 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.175025 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.175078 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.175096 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.175120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.175136 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.278288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.278748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.278768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.278794 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.278812 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.298426 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lbw96"] Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.298877 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.301114 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.301116 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.301272 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.303091 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.317639 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.337183 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.359255 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.359262 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.359426 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.359370 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.359495 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:44 crc kubenswrapper[4765]: E1203 20:38:44.359629 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.373104 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.385537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.385573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.385581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.385596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.385605 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.394802 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.406209 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 
20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.416442 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.430067 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.440866 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-host\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.440907 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-serviceca\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.440927 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zlh\" (UniqueName: \"kubernetes.io/projected/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-kube-api-access-w7zlh\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.442334 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.456011 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.476554 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.488213 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.488250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.488265 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.488279 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.488287 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.488781 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.501113 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.512440 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.521439 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.538352 4765 generic.go:334] "Generic (PLEG): container finished" podID="3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9" containerID="ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694" exitCode=0 Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.538500 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerDied","Data":"ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.541422 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-host\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.541447 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-serviceca\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.541466 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zlh\" (UniqueName: 
\"kubernetes.io/projected/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-kube-api-access-w7zlh\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.541519 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-host\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.541884 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.541986 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.542074 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.542194 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.542400 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" 
event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.542493 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.542580 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-serviceca\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.545627 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.557124 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.561573 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zlh\" (UniqueName: \"kubernetes.io/projected/a392f98a-e249-4c20-a5b0-aeddb4cc0ad7-kube-api-access-w7zlh\") pod \"node-ca-lbw96\" (UID: \"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\") " pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.570490 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.584420 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.590383 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.590428 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.590440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.590460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.590472 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.597021 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.608291 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.611478 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lbw96" Dec 03 20:38:44 crc kubenswrapper[4765]: W1203 20:38:44.625157 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda392f98a_e249_4c20_a5b0_aeddb4cc0ad7.slice/crio-81bb9e453c1ae9f8df85a92f74b7e80a4ac7fa7cd0af773e2d0b3b595b5affa3 WatchSource:0}: Error finding container 81bb9e453c1ae9f8df85a92f74b7e80a4ac7fa7cd0af773e2d0b3b595b5affa3: Status 404 returned error can't find the container with id 81bb9e453c1ae9f8df85a92f74b7e80a4ac7fa7cd0af773e2d0b3b595b5affa3 Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.627363 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.637845 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.647095 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.682992 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.692959 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.692993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.693001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.693015 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.693023 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.717818 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.758494 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.795598 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.795626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.795634 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.795648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.795656 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.798853 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.837712 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.876158 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.897544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.897574 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.897583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:44 crc 
kubenswrapper[4765]: I1203 20:38:44.897596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.897605 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:44Z","lastTransitionTime":"2025-12-03T20:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:44 crc kubenswrapper[4765]: I1203 20:38:44.922786 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:44Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.000144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.000185 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.000196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.000213 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.000223 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.102716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.103092 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.103103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.103121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.103135 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.206582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.206630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.206643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.206666 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.206683 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.309026 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.309110 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.309125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.309144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.309156 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.411899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.411959 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.411973 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.411992 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.412002 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.514600 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.514639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.514648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.514667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.514679 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.546060 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lbw96" event={"ID":"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7","Type":"ContainerStarted","Data":"5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.546113 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lbw96" event={"ID":"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7","Type":"ContainerStarted","Data":"81bb9e453c1ae9f8df85a92f74b7e80a4ac7fa7cd0af773e2d0b3b595b5affa3"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.548159 4765 generic.go:334] "Generic (PLEG): container finished" podID="3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9" containerID="e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d" exitCode=0 Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.548189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerDied","Data":"e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.570016 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.583867 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.597786 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.611139 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.617702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.617728 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.617737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.617752 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.617762 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.621725 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.640459 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.653543 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.665988 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.681175 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.691491 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.707567 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.720226 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.720265 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.720275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.720308 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.720318 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.721492 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f
2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.735048 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.746795 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.763161 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.774510 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.788529 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.803714 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.816286 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.822155 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.822200 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.822211 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.822229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.822241 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.828504 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f
2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.848680 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.859733 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.870391 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.886483 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.919409 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.925173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.925213 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.925222 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:45 crc 
kubenswrapper[4765]: I1203 20:38:45.925238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.925250 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:45Z","lastTransitionTime":"2025-12-03T20:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 20:38:45.956745 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:45 crc kubenswrapper[4765]: I1203 
20:38:45.997791 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.028118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.028189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.028206 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.028234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.028254 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.042463 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.080690 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.123390 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.132211 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc 
kubenswrapper[4765]: I1203 20:38:46.132256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.132272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.132318 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.132331 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.235236 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.235388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.235415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.235450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.235472 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.338723 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.338784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.338801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.338825 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.338843 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.359400 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.359467 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.359432 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:46 crc kubenswrapper[4765]: E1203 20:38:46.359642 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:46 crc kubenswrapper[4765]: E1203 20:38:46.359873 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:46 crc kubenswrapper[4765]: E1203 20:38:46.360054 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.441818 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.441874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.441892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.441916 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.441933 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.544408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.544470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.544517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.544549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.544573 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.554462 4765 generic.go:334] "Generic (PLEG): container finished" podID="3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9" containerID="107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf" exitCode=0 Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.554515 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerDied","Data":"107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.578213 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.596442 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.612699 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.637909 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.648360 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.648395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.648404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.648419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.648429 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.656012 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.668956 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.683883 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.693496 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.709313 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.721543 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.732466 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.743441 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.751538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.751588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.751601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.751623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.751636 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.758084 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.776426 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.815220 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:46Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.853765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.853800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.853809 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.853823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.853833 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.956119 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.956165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.956176 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.956195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:46 crc kubenswrapper[4765]: I1203 20:38:46.956210 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:46Z","lastTransitionTime":"2025-12-03T20:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.058624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.058655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.058664 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.058677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.058685 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.161327 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.161413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.161436 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.161464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.161483 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.264185 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.264216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.264225 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.264240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.264248 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.367329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.367363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.367371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.367384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.367393 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.469893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.470175 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.470254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.470528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.470637 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.563411 4765 generic.go:334] "Generic (PLEG): container finished" podID="3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9" containerID="1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495" exitCode=0 Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.563538 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerDied","Data":"1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.570059 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.574251 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.574293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.574324 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.574527 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.574538 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.586744 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.601118 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.613046 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.631067 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.647542 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.661668 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.672874 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.677114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.677162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.677178 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.677199 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.677212 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.688255 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.700467 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.711900 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.724583 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.735149 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.750364 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.763805 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 
20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.775560 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:47Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.780143 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.780181 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.780891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.780932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.780948 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.884520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.884578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.884599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.884630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.884654 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.988126 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.988169 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.988178 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.988192 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:47 crc kubenswrapper[4765]: I1203 20:38:47.988202 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:47Z","lastTransitionTime":"2025-12-03T20:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077146 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:38:56.077115375 +0000 UTC m=+34.007660536 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.077158 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.077511 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.077643 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.077713 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077795 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077817 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077818 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077915 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:56.077893787 +0000 UTC m=+34.008438978 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077908 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.078013 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:56.07799177 +0000 UTC m=+34.008536911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.077832 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.078067 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-03 20:38:56.078059322 +0000 UTC m=+34.008604593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.091772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.091831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.091855 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.091886 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.091908 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.179260 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.179552 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.179593 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.179613 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.179697 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:56.179674762 +0000 UTC m=+34.110219943 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.195653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.195709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.195726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.195751 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.195769 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.299059 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.299121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.299142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.299174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.299198 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.359363 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.359635 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.359815 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.359870 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.360010 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:48 crc kubenswrapper[4765]: E1203 20:38:48.360194 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.402577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.402652 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.402674 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.402704 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.402726 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.505962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.506002 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.506011 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.506027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.506036 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.579958 4765 generic.go:334] "Generic (PLEG): container finished" podID="3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9" containerID="ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8" exitCode=0 Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.580039 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerDied","Data":"ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.604764 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.608922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.608961 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.608971 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc 
kubenswrapper[4765]: I1203 20:38:48.608992 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.609006 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.630503 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 
20:38:48.645363 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.665527 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.682770 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.707208 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.711684 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.711730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.711753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc 
kubenswrapper[4765]: I1203 20:38:48.711775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.711788 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.728413 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.741322 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.753901 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 
20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.765367 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.778538 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.798405 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.813616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.813642 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.813650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.813663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.813673 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.815453 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.836143 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.846237 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:48Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.916756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.916804 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.916821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.916839 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:48 crc kubenswrapper[4765]: I1203 20:38:48.916849 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:48Z","lastTransitionTime":"2025-12-03T20:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.022161 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.022226 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.022269 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.022337 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.022365 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.125588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.125647 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.125659 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.125677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.125690 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.228223 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.228271 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.228284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.228324 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.228340 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.330824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.330870 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.330886 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.330907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.330922 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.433998 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.434043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.434053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.434070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.434083 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.537032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.537094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.537105 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.537125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.537137 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.588977 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" event={"ID":"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9","Type":"ContainerStarted","Data":"5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.594038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.594393 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.611867 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.625530 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.632105 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\
",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a23
39002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9
be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.640196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.640235 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.640246 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.640266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.640283 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.644694 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.658592 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.671588 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.701394 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.717416 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.732704 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T2
0:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.744108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.744167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.744185 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.744211 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.744231 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.746967 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.762394 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.777342 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.795987 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.815145 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.837349 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.847346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.847392 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.847404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.847426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.847439 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.854285 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.876088 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.891199 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.904552 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.918853 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.932156 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.949941 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.949993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.950005 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.950024 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.950040 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:49Z","lastTransitionTime":"2025-12-03T20:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.952001 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.965498 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.979523 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:49 crc kubenswrapper[4765]: I1203 20:38:49.997101 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-m
etrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-
controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:49Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.013123 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.034380 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.049511 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.053150 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.053216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.053236 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.053261 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.053281 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.062539 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.075093 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.092042 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.156493 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc 
kubenswrapper[4765]: I1203 20:38:50.156564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.156586 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.156620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.156643 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.260246 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.260357 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.260380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.260412 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.260435 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.359635 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.359635 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:50 crc kubenswrapper[4765]: E1203 20:38:50.359875 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.359935 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:50 crc kubenswrapper[4765]: E1203 20:38:50.360067 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:50 crc kubenswrapper[4765]: E1203 20:38:50.360197 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.363861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.363940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.363970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.364006 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.364030 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.467588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.467650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.467667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.467692 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.467714 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.570652 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.570723 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.570746 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.570776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.570797 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.597163 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.598113 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.674547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.674615 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.674631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.674655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.674678 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.697268 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.717741 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.734535 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.750285 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.765276 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.778434 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.778526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.778549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.778584 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.778607 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.789426 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.808574 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.831481 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.852795 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.871183 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.881282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.881352 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.881368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.881387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.881401 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.901607 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.920544 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.935242 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.946326 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.954639 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.971581 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:50Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.984027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.984066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.984076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.984091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:50 crc kubenswrapper[4765]: I1203 20:38:50.984101 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:50Z","lastTransitionTime":"2025-12-03T20:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.086821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.086871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.086888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.086912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.086925 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.189459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.189501 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.189510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.189525 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.189535 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.292644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.292946 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.293042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.293156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.293378 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.395734 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.395784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.395800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.395821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.395836 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.498419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.498457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.498467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.498482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.498490 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.599909 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.601577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.601771 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.601924 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.602082 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.602223 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.705781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.705864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.705882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.705910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.705938 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.809561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.809619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.809694 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.809718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.809736 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.911603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.911648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.911661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.911679 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:51 crc kubenswrapper[4765]: I1203 20:38:51.911691 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:51Z","lastTransitionTime":"2025-12-03T20:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.015211 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.015256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.015266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.015282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.015316 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.119112 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.119182 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.119198 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.119227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.119245 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.221576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.221633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.221651 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.221675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.221694 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.324792 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.324836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.324849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.324869 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.324880 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.359024 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.359158 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:52 crc kubenswrapper[4765]: E1203 20:38:52.359272 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.359517 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:52 crc kubenswrapper[4765]: E1203 20:38:52.359652 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:52 crc kubenswrapper[4765]: E1203 20:38:52.359790 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.375008 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.392193 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.406459 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.423746 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.428031 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc 
kubenswrapper[4765]: I1203 20:38:52.428067 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.428076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.428095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.428105 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.447098 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.461052 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.479481 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.498732 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.522007 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.533643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.533724 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.533748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.533780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.533803 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.557063 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.575229 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.590443 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.605467 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/0.log" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.609492 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d" exitCode=1 Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.609569 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.610996 4765 scope.go:117] "RemoveContainer" containerID="b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.620674 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.635289 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.636544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.636574 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.636583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.636597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.636607 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.650046 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.665826 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 
20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.713387 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.730800 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.738490 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.738544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.738560 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.738583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.738599 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.747691 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.763992 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.786243 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227
a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.800106 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.815041 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.828693 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.840881 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.841896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.841927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.841936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.841952 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.841961 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.857352 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.861171 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a
4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.873211 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.885526 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.900470 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.912088 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.924156 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.933890 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.944502 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.944569 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.944588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:52 crc 
kubenswrapper[4765]: I1203 20:38:52.944614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.944636 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:52Z","lastTransitionTime":"2025-12-03T20:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.947731 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.958834 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.977808 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:52 crc kubenswrapper[4765]: I1203 20:38:52.993452 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.009133 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.023499 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.037798 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.047785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.047856 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.047874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.047897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.047914 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.061999 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.074488 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.096894 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a
4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.109361 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.125007 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.138781 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.150774 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.150832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.150843 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.150865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.150877 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.254105 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.254183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.254207 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.254239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.254262 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.357858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.357904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.357915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.357937 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.357949 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.460121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.460161 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.460171 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.460187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.460201 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.563258 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.563363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.563384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.563414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.563437 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.600488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.600550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.600567 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.600592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.600610 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.616843 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/0.log" Dec 03 20:38:53 crc kubenswrapper[4765]: E1203 20:38:53.618669 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"sys
temUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.622087 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.622230 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.624326 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.624365 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.624381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.624402 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.624418 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: E1203 20:38:53.643726 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.646115 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.651349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.651405 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.651420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.651441 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.651458 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.661215 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: E1203 20:38:53.672924 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.683402 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.683676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.683787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.683919 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.684043 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.688326 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z 
is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.703931 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: E1203 20:38:53.710038 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.714047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.714100 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.714121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.714147 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.714163 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.724184 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: E1203 20:38:53.727756 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: E1203 20:38:53.727974 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.730218 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.730257 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.730270 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.730287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.730316 4765 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.748673 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.763049 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.780427 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.798791 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.825332 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.832608 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.832646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.832658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.832673 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.832687 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.840427 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.857573 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.876247 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.890904 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.919926 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 
20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:53Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.935865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.935915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.935934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.935958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:53 crc kubenswrapper[4765]: I1203 20:38:53.935975 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:53Z","lastTransitionTime":"2025-12-03T20:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.039526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.039581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.039599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.039623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.039642 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.142981 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.143035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.143051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.143076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.143093 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.246346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.246417 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.246440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.246470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.246492 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.349547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.349627 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.349652 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.349680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.349698 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.358851 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.358898 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:54 crc kubenswrapper[4765]: E1203 20:38:54.359020 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.359117 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:54 crc kubenswrapper[4765]: E1203 20:38:54.359190 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:54 crc kubenswrapper[4765]: E1203 20:38:54.359351 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.453513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.453582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.453602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.453628 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.453646 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.484643 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb"] Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.485295 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.488881 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.489398 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.511288 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.543897 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 
20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.556783 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.557050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.557240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.557555 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.557938 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.566023 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.574419 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f89a3e7b-e621-4e0c-95be-586218807c8b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.574653 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f89a3e7b-e621-4e0c-95be-586218807c8b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.574860 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f89a3e7b-e621-4e0c-95be-586218807c8b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.575061 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffjk\" (UniqueName: \"kubernetes.io/projected/f89a3e7b-e621-4e0c-95be-586218807c8b-kube-api-access-tffjk\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.584520 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.600095 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.619224 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.629837 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.641677 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.662495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.662555 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.662573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc 
kubenswrapper[4765]: I1203 20:38:54.662601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.662619 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.665650 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.676530 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f89a3e7b-e621-4e0c-95be-586218807c8b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.676618 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f89a3e7b-e621-4e0c-95be-586218807c8b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.676863 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f89a3e7b-e621-4e0c-95be-586218807c8b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 
20:38:54.677794 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tffjk\" (UniqueName: \"kubernetes.io/projected/f89a3e7b-e621-4e0c-95be-586218807c8b-kube-api-access-tffjk\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.678010 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f89a3e7b-e621-4e0c-95be-586218807c8b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.678806 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f89a3e7b-e621-4e0c-95be-586218807c8b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.686355 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f89a3e7b-e621-4e0c-95be-586218807c8b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.687859 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.709063 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.710237 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tffjk\" (UniqueName: \"kubernetes.io/projected/f89a3e7b-e621-4e0c-95be-586218807c8b-kube-api-access-tffjk\") pod \"ovnkube-control-plane-749d76644c-tblsb\" (UID: \"f89a3e7b-e621-4e0c-95be-586218807c8b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.733483 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.753080 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.765643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.765708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.765727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.765753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.765771 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.780691 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.799403 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.812726 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" Dec 03 20:38:54 crc kubenswrapper[4765]: W1203 20:38:54.832928 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf89a3e7b_e621_4e0c_95be_586218807c8b.slice/crio-4ed28e9e8d3c0e222c0f91fe4770369f4603e5cde9046cf984c0a87f26f85358 WatchSource:0}: Error finding container 4ed28e9e8d3c0e222c0f91fe4770369f4603e5cde9046cf984c0a87f26f85358: Status 404 returned error can't find the container with id 4ed28e9e8d3c0e222c0f91fe4770369f4603e5cde9046cf984c0a87f26f85358 Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.837474 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name
\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.854167 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:54Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.868765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.868810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.868823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.868842 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.868855 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.971714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.971771 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.971787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.971811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:54 crc kubenswrapper[4765]: I1203 20:38:54.971829 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:54Z","lastTransitionTime":"2025-12-03T20:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.074340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.074674 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.074693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.074722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.074742 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.177898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.177967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.177986 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.178019 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.178038 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.281048 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.281082 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.281093 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.281110 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.281119 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.383929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.383984 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.383994 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.384011 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.384023 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.486910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.486950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.486960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.486975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.486986 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.590034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.590077 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.590090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.590108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.590121 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.635529 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/1.log" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.636529 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/0.log" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.640207 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c" exitCode=1 Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.640318 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.640392 4765 scope.go:117] "RemoveContainer" containerID="b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.641472 4765 scope.go:117] "RemoveContainer" containerID="7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c" Dec 03 20:38:55 crc kubenswrapper[4765]: E1203 20:38:55.641693 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.644730 4765 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" event={"ID":"f89a3e7b-e621-4e0c-95be-586218807c8b","Type":"ContainerStarted","Data":"f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.644792 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" event={"ID":"f89a3e7b-e621-4e0c-95be-586218807c8b","Type":"ContainerStarted","Data":"f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.644804 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" event={"ID":"f89a3e7b-e621-4e0c-95be-586218807c8b","Type":"ContainerStarted","Data":"4ed28e9e8d3c0e222c0f91fe4770369f4603e5cde9046cf984c0a87f26f85358"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.655867 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc8
95822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.668786 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.681471 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.692483 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.692532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.692548 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.692565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.692576 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.694171 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.706262 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.716825 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.733502 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002
f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.743911 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.756021 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.768874 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.784906 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.795204 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.795284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.795344 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.795381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.795407 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.802690 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 
20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 
handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.814754 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.826233 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.842265 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.854272 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.881067 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.896939 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.898550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.898615 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.898637 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.898666 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.898690 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:55Z","lastTransitionTime":"2025-12-03T20:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.913569 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.931015 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.945991 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.981754 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 
20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 
handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:55Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.988233 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9bhn8"] Dec 
03 20:38:55 crc kubenswrapper[4765]: I1203 20:38:55.988975 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:55 crc kubenswrapper[4765]: E1203 20:38:55.989077 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.001893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.001957 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.001982 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.002013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.002037 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.002340 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.024703 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.043956 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.061927 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.084885 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04b
e36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3693
0772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.095510 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.095616 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkj6\" (UniqueName: \"kubernetes.io/projected/d2670be8-9fe5-4210-ba7f-9538bbea79b8-kube-api-access-gqkj6\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.095683 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.095711 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:39:12.095675833 +0000 UTC m=+50.026221014 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.095809 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.095831 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.095857 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.095904 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:12.095882049 +0000 UTC m=+50.026427240 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.095938 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.095984 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.096040 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 20:39:12.096026723 +0000 UTC m=+50.026571904 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.096234 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.096283 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.096328 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.096492 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:12.096463656 +0000 UTC m=+50.027008837 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.105014 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.105801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.105883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.105906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.105933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.105954 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.127232 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.142790 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.157820 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.169863 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.194011 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.197056 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkj6\" (UniqueName: \"kubernetes.io/projected/d2670be8-9fe5-4210-ba7f-9538bbea79b8-kube-api-access-gqkj6\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.197140 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.197209 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.197366 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.197431 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.197460 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:56.697420689 +0000 UTC m=+34.627965870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.197467 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.197510 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.197582 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:12.197561473 +0000 UTC m=+50.128106664 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.208031 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.208069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.208080 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.208099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.208115 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.211756 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.219405 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkj6\" (UniqueName: \"kubernetes.io/projected/d2670be8-9fe5-4210-ba7f-9538bbea79b8-kube-api-access-gqkj6\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.230839 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f3
6cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.240496 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.262342 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5a2f04c23d4bef11bf28126e97f93489d1d9db9c3bbc44bf2aa6a0f3b1dcc1d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:51Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:51.449431 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:51.449465 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:51.449479 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 
20:38:51.449493 6076 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:51.449507 6076 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:38:51.449512 6076 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:51.449574 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:51.449584 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:51.449609 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:51.449621 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 20:38:51.449636 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:51.449722 6076 factory.go:656] Stopping watch factory\\\\nI1203 20:38:51.449739 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:51.449573 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:51.449749 6076 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:38:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 
handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.276350 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc 
kubenswrapper[4765]: I1203 20:38:56.291671 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.310699 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.310754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.310764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.310781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.310796 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.313362 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.328853 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.344217 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.359031 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.359102 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.359151 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.359213 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.359354 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.359483 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.361041 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.382916 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.397643 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.413052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.413112 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.413126 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.413141 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.413152 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.413643 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.418185 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\
\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.433452 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a20638023
10e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.450129 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc8
95822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.464149 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.515262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.515335 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.515350 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.515369 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.515381 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.618440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.618484 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.618494 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.618510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.618521 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.652425 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/1.log" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.658090 4765 scope.go:117] "RemoveContainer" containerID="7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.658393 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.682754 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.699072 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.702640 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.702981 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: E1203 20:38:56.703098 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:57.703072519 +0000 UTC m=+35.633617680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.721794 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.721859 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.721872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.721891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.721903 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.730438 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.745874 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.766819 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.784389 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.804236 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.825999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.826384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.826535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.826719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.826963 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.828729 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.848209 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.872331 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.892373 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.910020 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.924596 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.929645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.929693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.929707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.929726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.929739 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:56Z","lastTransitionTime":"2025-12-03T20:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.941061 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.954539 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.971965 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:56 crc kubenswrapper[4765]: I1203 20:38:56.986738 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:38:56Z is after 2025-08-24T17:21:41Z" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.031819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.031857 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.031871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.031888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.031899 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.134932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.135017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.135041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.135068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.135088 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.238189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.238263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.238322 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.238371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.238389 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.341882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.341963 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.341979 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.341999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.342022 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.359480 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:57 crc kubenswrapper[4765]: E1203 20:38:57.359699 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.444703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.444864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.444889 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.444945 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.444966 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.547485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.547564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.547582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.547605 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.547622 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.650811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.650865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.650882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.650906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.650924 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.713756 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:38:57 crc kubenswrapper[4765]: E1203 20:38:57.713981 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:38:57 crc kubenswrapper[4765]: E1203 20:38:57.714058 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:38:59.714034461 +0000 UTC m=+37.644579642 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.754781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.754857 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.754880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.754915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.754936 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.858329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.858393 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.858408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.858433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.858446 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.961788 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.961912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.961928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.961956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:57 crc kubenswrapper[4765]: I1203 20:38:57.961973 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:57Z","lastTransitionTime":"2025-12-03T20:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.066035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.066125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.066140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.066163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.066180 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.169672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.169734 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.169751 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.169774 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.169796 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.273051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.273114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.273132 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.273156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.273176 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.359527 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.359527 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:38:58 crc kubenswrapper[4765]: E1203 20:38:58.360176 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.359611 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:38:58 crc kubenswrapper[4765]: E1203 20:38:58.360540 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:38:58 crc kubenswrapper[4765]: E1203 20:38:58.360527 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.376026 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.376282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.376408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.376523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.376609 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.479597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.479644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.479657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.479673 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.479683 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.582391 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.582473 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.582489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.582541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.582560 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.684760 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.684840 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.684866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.684898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.684923 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.788730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.788802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.788825 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.788853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.788877 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.892226 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.892347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.892366 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.892391 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.892411 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.995779 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.995903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.995939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.995968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:58 crc kubenswrapper[4765]: I1203 20:38:58.995989 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:58Z","lastTransitionTime":"2025-12-03T20:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.099057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.099132 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.099150 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.099174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.099193 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.202254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.202426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.202454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.202490 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.202516 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.305943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.306004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.306031 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.306058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.306075 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.359492 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8"
Dec 03 20:38:59 crc kubenswrapper[4765]: E1203 20:38:59.359676 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.409385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.409434 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.409444 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.409464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.409476 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.511843 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.511883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.511898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.511920 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.511936 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.614841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.614897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.614914 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.614941 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.614960 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.717589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.717629 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.717642 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.717660 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.717673 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.734719 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8"
Dec 03 20:38:59 crc kubenswrapper[4765]: E1203 20:38:59.734863 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 20:38:59 crc kubenswrapper[4765]: E1203 20:38:59.734915 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:03.734901746 +0000 UTC m=+41.665446897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.819918 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.819954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.819964 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.819977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.819987 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.923003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.923060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.923076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.923097 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:38:59 crc kubenswrapper[4765]: I1203 20:38:59.923113 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:38:59Z","lastTransitionTime":"2025-12-03T20:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.026995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.027064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.027082 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.027107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.027127 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.130095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.130180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.130205 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.130239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.130262 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.233225 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.233280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.233329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.233363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.233393 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.336879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.336942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.336959 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.336983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.337000 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.359601 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.359617 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 20:39:00 crc kubenswrapper[4765]: E1203 20:39:00.359759 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 20:39:00 crc kubenswrapper[4765]: E1203 20:39:00.359885 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.359628 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 20:39:00 crc kubenswrapper[4765]: E1203 20:39:00.360040 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.439383 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.439440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.439459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.439484 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.439502 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.543287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.543377 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.543396 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.543425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.543444 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.646559 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.646606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.646622 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.646638 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.646648 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.749044 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.749109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.749131 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.749162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.749175 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.851832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.851881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.851893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.851911 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.851923 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.954824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.954900 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.954917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.954942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:00 crc kubenswrapper[4765]: I1203 20:39:00.954961 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:00Z","lastTransitionTime":"2025-12-03T20:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.058062 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.058142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.058168 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.058195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.058215 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.161833 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.161907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.161926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.161953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.161976 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.265457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.265538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.265558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.265587 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.265605 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.358955 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8"
Dec 03 20:39:01 crc kubenswrapper[4765]: E1203 20:39:01.359204 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.368346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.368405 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.368430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.368462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.368486 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.471175 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.471245 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.471265 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.471290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.471337 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.575216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.575345 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.575372 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.575409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.575432 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.677743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.678087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.678168 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.678279 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.678400 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.781272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.781735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.781881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.782024 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.782156 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.885194 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.886014 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.886062 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.886103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.886128 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.990177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.990379 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.990413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.990448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 20:39:01 crc kubenswrapper[4765]: I1203 20:39:01.990473 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:01Z","lastTransitionTime":"2025-12-03T20:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.093277 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.093381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.093401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.093431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.093451 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.197035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.197101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.197117 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.197140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.197157 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.298977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.299055 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.299078 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.299105 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.299124 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.359461 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.359570 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:02 crc kubenswrapper[4765]: E1203 20:39:02.359770 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.359881 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:02 crc kubenswrapper[4765]: E1203 20:39:02.360096 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:02 crc kubenswrapper[4765]: E1203 20:39:02.360232 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.386335 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.401732 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.401786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.401802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.401825 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.401841 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.405227 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.423398 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.435314 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.451990 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.467966 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.495767 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.504793 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.504867 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.504892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.504922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.504945 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.510109 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.526452 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.545126 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.557341 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.591400 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.607323 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.607364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.607381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.607405 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.607419 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.611264 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc 
kubenswrapper[4765]: I1203 20:39:02.632745 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.652098 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.669589 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.682872 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:02Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.709707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.709753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.709764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.709782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.709795 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.812714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.813067 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.813090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.813119 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.813142 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.916466 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.916535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.916552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.916576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:02 crc kubenswrapper[4765]: I1203 20:39:02.916594 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:02Z","lastTransitionTime":"2025-12-03T20:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.020174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.020248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.020273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.020347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.020374 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.124032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.124121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.124138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.124164 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.124181 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.227144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.227216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.227234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.227258 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.227273 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.330515 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.330587 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.330607 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.330633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.330650 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.358994 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:03 crc kubenswrapper[4765]: E1203 20:39:03.359186 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.433862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.433940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.433967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.433996 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.434017 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.536879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.536936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.536956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.536980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.536997 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.641663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.641730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.641746 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.641772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.641793 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.745532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.745624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.745649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.745688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.745711 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.781883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:03 crc kubenswrapper[4765]: E1203 20:39:03.782082 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:39:03 crc kubenswrapper[4765]: E1203 20:39:03.782188 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:11.782159981 +0000 UTC m=+49.712705172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.849414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.849481 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.849499 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.849523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.849542 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.952644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.952722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.952745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.952778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:03 crc kubenswrapper[4765]: I1203 20:39:03.952801 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:03Z","lastTransitionTime":"2025-12-03T20:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.056033 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.056104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.056122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.056146 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.056164 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.090051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.090130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.090157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.090190 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.090216 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.111629 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:04Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.116716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.116785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.116811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.116839 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.116861 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.136921 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:04Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.142260 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.142356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.142379 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.142405 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.142425 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.162808 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:04Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.167647 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.167743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.167767 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.167800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.167826 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.189651 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:04Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.195744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.195858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.195883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.195956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.195980 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.217373 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:04Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.217510 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.219452 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.219532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.219555 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.219580 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.219600 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.322395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.322460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.322482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.322506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.322523 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.359498 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.359578 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.359514 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.359683 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.359884 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:04 crc kubenswrapper[4765]: E1203 20:39:04.359995 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.426003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.426085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.426108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.426137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.426155 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.529479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.529546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.529563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.529588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.529626 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.632545 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.632595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.632606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.632626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.632639 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.735614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.735681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.735698 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.735730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.735750 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.838747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.838810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.838829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.838858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.838877 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.942056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.942114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.942131 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.942158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:04 crc kubenswrapper[4765]: I1203 20:39:04.942173 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:04Z","lastTransitionTime":"2025-12-03T20:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.047230 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.047321 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.047341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.047367 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.047384 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.150661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.150757 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.150775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.150803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.150820 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.254372 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.254404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.254414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.254430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.254443 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.356931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.357005 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.357030 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.357061 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.357086 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.359248 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:05 crc kubenswrapper[4765]: E1203 20:39:05.359414 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.460174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.460216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.460227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.460243 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.460254 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.563951 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.564017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.564037 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.564064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.564084 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.667952 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.668041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.668061 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.668090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.668115 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.772180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.772248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.772266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.772293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.772340 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.875535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.875597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.875616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.875641 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.875658 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.979429 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.979476 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.979485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.979503 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:05 crc kubenswrapper[4765]: I1203 20:39:05.979514 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:05Z","lastTransitionTime":"2025-12-03T20:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.082666 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.082711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.082721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.082737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.082751 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.186463 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.186527 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.186544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.186567 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.186588 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.290153 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.290213 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.290233 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.290258 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.290275 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.359170 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.359219 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:06 crc kubenswrapper[4765]: E1203 20:39:06.359411 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.359448 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:06 crc kubenswrapper[4765]: E1203 20:39:06.359652 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:06 crc kubenswrapper[4765]: E1203 20:39:06.359849 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.393389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.393440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.393460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.393482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.393499 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.496042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.496116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.496140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.496167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.496184 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.599050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.599102 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.599119 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.599144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.599161 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.701440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.701490 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.701507 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.701531 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.701548 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.804831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.804879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.804892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.804915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.804928 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.907646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.907710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.907727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.907752 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:06 crc kubenswrapper[4765]: I1203 20:39:06.907777 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:06Z","lastTransitionTime":"2025-12-03T20:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.010874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.010992 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.011018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.011052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.011076 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.114587 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.114647 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.114658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.114677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.114689 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.217622 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.217667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.217677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.217693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.217706 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.321849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.321897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.321914 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.321939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.321957 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.359707 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:07 crc kubenswrapper[4765]: E1203 20:39:07.359923 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.425506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.425576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.425599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.425636 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.425657 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.528396 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.528458 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.528477 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.528502 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.528519 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.631410 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.631470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.631492 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.631514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.631535 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.735451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.735516 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.735533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.735558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.735576 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.838400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.838459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.838476 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.838501 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.838519 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.941849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.941902 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.941913 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.941931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:07 crc kubenswrapper[4765]: I1203 20:39:07.941945 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:07Z","lastTransitionTime":"2025-12-03T20:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.045289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.045379 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.045398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.045422 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.045442 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.148569 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.148636 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.148658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.148690 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.148713 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.251676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.251713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.251724 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.251742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.251754 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.354332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.354374 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.354390 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.354408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.354420 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.358768 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.358781 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.358852 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:08 crc kubenswrapper[4765]: E1203 20:39:08.358974 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:08 crc kubenswrapper[4765]: E1203 20:39:08.359239 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:08 crc kubenswrapper[4765]: E1203 20:39:08.359743 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.360229 4765 scope.go:117] "RemoveContainer" containerID="7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.457719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.457787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.457808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.457834 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.457854 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.560573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.560619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.560634 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.560653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.560666 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.663568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.663614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.663630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.663650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.663664 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.705338 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/1.log" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.708904 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.709757 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.733357 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.751423 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.766874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.766916 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.766926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc 
kubenswrapper[4765]: I1203 20:39:08.766953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.766966 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.773551 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.795412 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.827459 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc8
95822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.845018 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.858667 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.869055 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.869122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.869137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.869157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.869173 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.874926 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.891362 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.902559 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.925191 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.936649 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.952637 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.971518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.971902 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.972021 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.972141 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.972265 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:08Z","lastTransitionTime":"2025-12-03T20:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.976068 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:08 crc kubenswrapper[4765]: I1203 20:39:08.988575 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:08Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.015024 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.031149 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc 
kubenswrapper[4765]: I1203 20:39:09.076436 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.076493 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.076510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.076533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.076550 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.178722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.178836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.178862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.178894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.178922 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.281522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.281590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.281607 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.281659 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.281682 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.358882 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:09 crc kubenswrapper[4765]: E1203 20:39:09.358992 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.384136 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.384191 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.384203 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.384221 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.384234 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.487082 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.487148 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.487178 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.487211 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.487234 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.590258 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.590285 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.590313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.590325 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.590334 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.693917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.693983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.694000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.694027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.694048 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.715860 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/2.log" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.716750 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/1.log" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.720852 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911" exitCode=1 Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.720897 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.721051 4765 scope.go:117] "RemoveContainer" containerID="7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.721970 4765 scope.go:117] "RemoveContainer" containerID="00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911" Dec 03 20:39:09 crc kubenswrapper[4765]: E1203 20:39:09.722232 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.757636 4765 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d
42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b
537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.778575 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.797341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.797424 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.797449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.797481 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.797504 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.797938 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.814151 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.827599 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.860855 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7156c30b7ebf53624aef49175fa52b7f5598134bdd5735322f4f5f009850fe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"message\\\":\\\" handler 1 for removal\\\\nI1203 20:38:53.763000 6205 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:38:53.763025 6205 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:38:53.763073 6205 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:38:53.763102 6205 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:38:53.763106 6205 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:38:53.763107 6205 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 20:38:53.763115 6205 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:38:53.763149 6205 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:38:53.763170 6205 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:38:53.763124 6205 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:38:53.763169 6205 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 20:38:53.763185 6205 factory.go:656] Stopping watch factory\\\\nI1203 20:38:53.763196 6205 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:38:53.763131 6205 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:38:53.763254 6205 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 
6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.879794 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc 
kubenswrapper[4765]: I1203 20:39:09.900797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.900863 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.900718 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.900881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.901172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.901211 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:09Z","lastTransitionTime":"2025-12-03T20:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.920634 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.943892 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.961107 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:09 crc kubenswrapper[4765]: I1203 20:39:09.983685 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:09Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.003373 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.004212 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.004261 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.004273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.004318 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.004337 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.024865 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.042820 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.064366 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.083488 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.107229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.107335 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.107359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.107386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.107405 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.210811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.210888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.210907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.210936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.210956 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.314537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.314592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.314609 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.314632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.314650 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.359093 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.359268 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.359607 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:10 crc kubenswrapper[4765]: E1203 20:39:10.359527 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:10 crc kubenswrapper[4765]: E1203 20:39:10.359914 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:10 crc kubenswrapper[4765]: E1203 20:39:10.360695 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.417491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.417522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.417530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.417544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.417552 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.519833 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.519866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.519875 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.519888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.519896 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.622608 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.622688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.622704 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.622727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.622742 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.725925 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/2.log" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.726070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.726107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.726115 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.726129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.726139 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.729404 4765 scope.go:117] "RemoveContainer" containerID="00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911" Dec 03 20:39:10 crc kubenswrapper[4765]: E1203 20:39:10.729535 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.743072 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.757921 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.770540 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.783221 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.800109 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.812606 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.828613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.828672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.828690 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.828715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.828733 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.842316 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.856373 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.870157 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.883416 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.893666 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.915590 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.927541 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.931294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.931382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.931397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.931418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.931435 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:10Z","lastTransitionTime":"2025-12-03T20:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.940872 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.953206 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.967066 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:10 crc kubenswrapper[4765]: I1203 20:39:10.979439 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:10Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.033775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.033815 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.033830 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.033851 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.033867 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.136923 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.136964 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.136978 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.136993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.137004 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.239757 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.239839 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.239865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.239897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.239921 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.341819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.341858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.341866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.341878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.341887 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.359336 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:11 crc kubenswrapper[4765]: E1203 20:39:11.359461 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.445372 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.445435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.445455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.445480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.445505 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.547630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.547668 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.547676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.547689 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.547698 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.650150 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.650229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.650251 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.650279 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.650346 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.753289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.753811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.754076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.754284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.754515 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.857273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.857389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.857407 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.857426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.857439 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.879162 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:11 crc kubenswrapper[4765]: E1203 20:39:11.879393 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:39:11 crc kubenswrapper[4765]: E1203 20:39:11.879505 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:27.879477212 +0000 UTC m=+65.810022413 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.959792 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.959845 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.959857 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.959875 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:11 crc kubenswrapper[4765]: I1203 20:39:11.959887 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:11Z","lastTransitionTime":"2025-12-03T20:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.062617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.062874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.062941 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.063013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.063079 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.165414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.165763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.165828 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.165887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.165947 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.181980 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.182166 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.182221 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.182333 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182384 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182431 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182494 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182514 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182455 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:44.182437044 +0000 UTC m=+82.112982205 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182456 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182628 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:44.182596299 +0000 UTC m=+82.113141450 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182685 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:44.18265296 +0000 UTC m=+82.113198171 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.182994 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:39:44.182979619 +0000 UTC m=+82.113524780 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.268523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.268831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.268840 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.268854 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.268862 4765 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.283177 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.283471 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.283547 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.283565 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.283665 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-03 20:39:44.283638114 +0000 UTC m=+82.214183265 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.359331 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.359331 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.359530 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.359569 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.359669 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:12 crc kubenswrapper[4765]: E1203 20:39:12.359790 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.370995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.371038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.371052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.371068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.371080 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.371834 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.398681 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.414647 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.432019 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.452225 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.466504 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.474010 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.474060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.474072 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.474092 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.474105 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.481674 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.494932 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.509820 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.521496 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.532871 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.546996 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.556179 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.568255 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.575748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.575782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.575790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.575803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.575812 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.581511 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.601762 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.615192 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:12Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.679554 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.679592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.679601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.679614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.679622 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.781950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.781987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.781997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.782017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.782028 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.884276 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.884384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.884411 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.884475 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.884501 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.987869 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.987925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.987942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.987967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:12 crc kubenswrapper[4765]: I1203 20:39:12.987984 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:12Z","lastTransitionTime":"2025-12-03T20:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.090849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.090905 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.090923 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.090946 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.090964 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.194542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.194602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.194621 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.194645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.194664 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.297091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.297163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.297189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.297487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.297517 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.359177 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:13 crc kubenswrapper[4765]: E1203 20:39:13.359396 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.400623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.400698 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.400719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.400742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.400761 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.504356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.504434 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.504457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.504490 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.504517 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.607125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.607180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.607196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.607219 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.607236 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.709880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.709926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.709943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.709968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.709985 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.812725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.812780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.812797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.812820 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.812861 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.915903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.915952 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.915966 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.915983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:13 crc kubenswrapper[4765]: I1203 20:39:13.915996 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:13Z","lastTransitionTime":"2025-12-03T20:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.018768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.018827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.018848 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.018875 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.018895 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.121244 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.121358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.121376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.121400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.121418 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.223995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.224053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.224069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.224090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.224105 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.327074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.327950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.328151 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.328430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.328619 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.358830 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.358970 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.358834 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.359026 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.359089 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.359247 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.432157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.432200 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.432212 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.432229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.432241 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.535594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.535663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.535680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.535707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.535726 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.571022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.571089 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.571105 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.571129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.571147 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.592566 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:14Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.598478 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.598539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.598563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.598591 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.598614 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.620502 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:14Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.626200 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.626273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.626441 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.626532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.626558 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.647782 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:14Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.653220 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.653280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.653293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.653341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.653355 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.673010 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:14Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.677458 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.677524 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.677541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.677563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.677580 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.697006 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:14Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:14 crc kubenswrapper[4765]: E1203 20:39:14.697173 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.698930 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.698999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.699018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.699042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.699060 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.801723 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.801791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.801816 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.801849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.801873 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.904775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.904849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.904865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.904928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:14 crc kubenswrapper[4765]: I1203 20:39:14.904950 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:14Z","lastTransitionTime":"2025-12-03T20:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.007746 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.007827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.007853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.007882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.007901 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.110693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.110779 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.110803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.110835 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.110858 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.213265 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.213339 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.213351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.213368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.213379 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.316134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.316237 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.316260 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.316288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.316353 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.359327 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:15 crc kubenswrapper[4765]: E1203 20:39:15.359819 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.419182 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.419226 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.419239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.419259 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.419272 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.521230 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.521265 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.521277 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.521328 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.521342 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.624047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.624103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.624115 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.624128 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.624136 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.726196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.726291 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.726356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.726383 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.726413 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.828773 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.828846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.828870 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.828899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.828916 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.932162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.932221 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.932240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.932264 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:15 crc kubenswrapper[4765]: I1203 20:39:15.932282 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:15Z","lastTransitionTime":"2025-12-03T20:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.035594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.035657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.035675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.035696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.035713 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.138327 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.138388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.138413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.138445 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.138468 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.240958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.240997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.241013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.241034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.241050 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.343947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.344014 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.344031 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.344053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.344083 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.359793 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.359872 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:16 crc kubenswrapper[4765]: E1203 20:39:16.360036 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.360114 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:16 crc kubenswrapper[4765]: E1203 20:39:16.360372 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:16 crc kubenswrapper[4765]: E1203 20:39:16.360570 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.447092 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.447127 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.447137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.447150 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.447159 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.550068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.550170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.550627 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.550713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.550997 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.654593 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.654657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.654676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.654702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.654724 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.757671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.757729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.757747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.757774 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.757792 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.806810 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.823712 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.828876 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.850993 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.860273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.860348 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.860368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.860398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.860417 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.866587 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.900287 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.912416 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.924102 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.933547 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.946286 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.956832 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.963568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.963605 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.963618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.963635 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.963648 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:16Z","lastTransitionTime":"2025-12-03T20:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.970750 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.981167 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:16 crc kubenswrapper[4765]: I1203 20:39:16.993532 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:16Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.006733 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:17Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.028057 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:17Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.045748 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:17Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.065594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.065624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.065632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.065644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.065654 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.072050 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:17Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.085229 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:17Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.168557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.168616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.168632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.168657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.168676 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.271681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.271732 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.271748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.271770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.271787 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.359009 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:17 crc kubenswrapper[4765]: E1203 20:39:17.359160 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.375079 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.375127 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.375141 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.375162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.375177 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.477975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.478026 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.478038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.478057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.478069 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.580630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.580691 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.580707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.580731 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.580748 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.682604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.682665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.682682 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.682706 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.682723 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.785966 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.786035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.786058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.786089 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.786117 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.888514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.888594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.888621 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.888650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.888674 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.991681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.991736 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.991748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.991768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:17 crc kubenswrapper[4765]: I1203 20:39:17.991781 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:17Z","lastTransitionTime":"2025-12-03T20:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.094626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.094674 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.094687 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.094705 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.094717 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.197073 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.197134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.197147 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.197165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.197178 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.300204 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.300242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.300251 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.300266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.300276 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.359454 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.359483 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:18 crc kubenswrapper[4765]: E1203 20:39:18.359690 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.359760 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:18 crc kubenswrapper[4765]: E1203 20:39:18.359946 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:18 crc kubenswrapper[4765]: E1203 20:39:18.360150 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.402650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.402700 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.402711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.402727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.402746 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.505119 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.505165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.505179 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.505196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.505208 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.607733 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.607776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.607787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.607801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.607811 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.710380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.710439 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.710457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.710480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.710497 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.813420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.813475 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.813492 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.813518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.813536 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.916356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.916422 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.916441 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.916464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:18 crc kubenswrapper[4765]: I1203 20:39:18.916482 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:18Z","lastTransitionTime":"2025-12-03T20:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.019915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.019990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.020009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.020034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.020051 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.123416 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.123482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.123510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.123542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.123565 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.226343 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.226392 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.226419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.226439 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.226452 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.328504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.328566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.328585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.328607 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.328623 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.358847 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:19 crc kubenswrapper[4765]: E1203 20:39:19.358959 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.431282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.431342 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.431354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.431369 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.431380 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.533370 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.533437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.533459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.533488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.533551 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.637538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.637603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.637625 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.637653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.637674 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.740671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.740727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.740744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.740769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.740787 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.844103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.844180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.844205 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.844236 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.844257 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.946926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.947000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.947028 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.947057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:19 crc kubenswrapper[4765]: I1203 20:39:19.947079 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:19Z","lastTransitionTime":"2025-12-03T20:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.050474 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.050550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.050572 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.050603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.050626 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.154015 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.154087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.154104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.154129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.154146 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.256624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.256678 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.256695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.256714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.256728 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.358930 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.358930 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.359329 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.359376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.359451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.359463 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.359481 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.359495 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: E1203 20:39:20.359538 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:20 crc kubenswrapper[4765]: E1203 20:39:20.359722 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:20 crc kubenswrapper[4765]: E1203 20:39:20.359855 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.462497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.462550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.462566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.462588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.462606 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.566050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.566114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.566130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.566156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.566174 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.669419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.669517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.669537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.669561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.669579 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.772884 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.772977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.772995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.773022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.773038 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.876687 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.876772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.876799 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.876832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.876857 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.979973 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.980060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.980077 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.980109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:20 crc kubenswrapper[4765]: I1203 20:39:20.980127 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:20Z","lastTransitionTime":"2025-12-03T20:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.083178 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.083250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.083275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.083342 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.083368 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.186654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.186954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.187003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.187038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.187058 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.290413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.290530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.290549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.290576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.290593 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.359109 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:21 crc kubenswrapper[4765]: E1203 20:39:21.359914 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.360423 4765 scope.go:117] "RemoveContainer" containerID="00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911" Dec 03 20:39:21 crc kubenswrapper[4765]: E1203 20:39:21.360901 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.393806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.393861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.393879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.393901 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.393918 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.497332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.497392 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.497414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.497437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.497453 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.600321 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.600364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.600379 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.600396 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.600408 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.703157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.703231 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.703256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.703287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.703348 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.806230 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.806330 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.806351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.806377 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.806396 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.909747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.909831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.909862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.909894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:21 crc kubenswrapper[4765]: I1203 20:39:21.909914 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:21Z","lastTransitionTime":"2025-12-03T20:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.013116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.013184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.013201 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.013225 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.013243 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.115781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.115851 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.115873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.115902 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.115920 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.219263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.219376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.219401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.219431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.219454 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.322509 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.322908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.322926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.322952 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.322969 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.359632 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:22 crc kubenswrapper[4765]: E1203 20:39:22.359836 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.360014 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.360101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:22 crc kubenswrapper[4765]: E1203 20:39:22.360345 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:22 crc kubenswrapper[4765]: E1203 20:39:22.360504 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.395109 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-0
3T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.413471 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.426039 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.426112 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.426126 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.426143 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.426156 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.428407 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.453236 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.463931 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.479397 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.498474 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.512693 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.529771 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.529832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.529847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.529881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.529897 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.530050 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.544237 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.560564 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.577690 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.593288 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.615933 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.633852 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.633935 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.633954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.634009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.634025 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.642014 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.661360 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T
20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905f
f8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.676953 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.697597 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:22Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.736843 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.736879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.736890 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.736905 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.736916 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.838762 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.838810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.838821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.838838 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.838850 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.941472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.941528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.941546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.941571 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:22 crc kubenswrapper[4765]: I1203 20:39:22.941589 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:22Z","lastTransitionTime":"2025-12-03T20:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.045177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.045266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.045291 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.045359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.045387 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.148439 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.148506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.148530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.148563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.148587 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.250563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.250617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.250632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.250649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.250658 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.353582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.353671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.353693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.353724 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.353746 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.359069 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:23 crc kubenswrapper[4765]: E1203 20:39:23.359215 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.456009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.456049 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.456058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.456075 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.456086 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.559281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.559373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.559385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.559407 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.559423 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.662498 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.662585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.662631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.662658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.662677 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.766741 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.766875 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.766903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.766934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.766961 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.870575 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.870704 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.870729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.870758 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.870776 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.973981 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.974028 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.974041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.974057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:23 crc kubenswrapper[4765]: I1203 20:39:23.974069 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:23Z","lastTransitionTime":"2025-12-03T20:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.076720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.076762 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.076772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.076792 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.076803 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.180631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.180709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.180733 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.180763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.180781 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.283167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.283223 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.283238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.283259 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.283276 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.359670 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.359707 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:24 crc kubenswrapper[4765]: E1203 20:39:24.359935 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.360149 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:24 crc kubenswrapper[4765]: E1203 20:39:24.360353 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:24 crc kubenswrapper[4765]: E1203 20:39:24.360813 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.391709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.391937 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.392118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.392377 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.392531 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.494612 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.495042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.495200 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.495369 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.495529 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.599786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.600861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.601057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.601841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.602168 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.706269 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.706386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.706405 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.706469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.706490 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.809254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.809401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.809423 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.809451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.809472 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.913854 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.913915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.913931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.913953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.913970 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.930738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.930880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.930908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.930941 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.930964 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: E1203 20:39:24.952090 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:24Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.957090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.957147 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.957165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.957189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.957205 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: E1203 20:39:24.978085 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:24Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.983242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.983365 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.983385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.983414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:24 crc kubenswrapper[4765]: I1203 20:39:24.983438 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:24Z","lastTransitionTime":"2025-12-03T20:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:24 crc kubenswrapper[4765]: E1203 20:39:24.997809 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:24Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.001832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.001920 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.001933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.001985 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.002002 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: E1203 20:39:25.019351 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:25Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.024149 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.024358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.024378 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.024400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.024417 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: E1203 20:39:25.049000 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:25Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:25 crc kubenswrapper[4765]: E1203 20:39:25.049948 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.057443 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.057514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.057538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.057562 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.057578 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.160942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.160999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.161009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.161032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.161045 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.264462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.264540 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.264561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.264588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.264606 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.358949 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:25 crc kubenswrapper[4765]: E1203 20:39:25.359128 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.367840 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.367901 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.367922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.367939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.367951 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.471650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.471773 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.471800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.471834 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.471858 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.574827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.574897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.574915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.574940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.574959 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.677645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.678388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.678430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.678449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.678466 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.781447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.781488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.781500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.781517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.781528 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.883958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.883998 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.884010 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.884026 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.884037 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.986891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.986921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.986928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.986941 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:25 crc kubenswrapper[4765]: I1203 20:39:25.986950 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:25Z","lastTransitionTime":"2025-12-03T20:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.089997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.090834 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.090897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.090939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.090955 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.194361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.194412 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.194426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.194444 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.194457 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.297159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.297207 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.297218 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.297234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.297245 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.359575 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:26 crc kubenswrapper[4765]: E1203 20:39:26.359766 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.360116 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:26 crc kubenswrapper[4765]: E1203 20:39:26.360242 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.360668 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:26 crc kubenswrapper[4765]: E1203 20:39:26.360793 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.400449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.400508 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.400526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.400551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.400564 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.504426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.504810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.504945 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.505088 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.505269 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.608066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.608128 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.608144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.608167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.608185 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.711069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.711110 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.711120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.711137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.711149 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.814111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.814180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.814198 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.814223 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.814243 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.916681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.916735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.916769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.916788 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:26 crc kubenswrapper[4765]: I1203 20:39:26.916804 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:26Z","lastTransitionTime":"2025-12-03T20:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.019398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.019558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.019581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.019606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.019629 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.122706 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.122789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.122803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.122822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.122840 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.226892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.226977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.226996 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.227025 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.227043 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.330460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.330542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.330564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.330595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.330617 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.359861 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:27 crc kubenswrapper[4765]: E1203 20:39:27.360139 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.433563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.433644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.433675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.433713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.433736 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.536578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.536620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.536628 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.536642 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.536653 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.639611 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.639681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.639703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.639726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.639743 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.742654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.742694 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.742703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.742718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.742727 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.844858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.844922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.844948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.844976 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.845000 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.947829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.947891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.947909 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.947931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.947947 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:27Z","lastTransitionTime":"2025-12-03T20:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:27 crc kubenswrapper[4765]: I1203 20:39:27.962131 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:27 crc kubenswrapper[4765]: E1203 20:39:27.962391 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:39:27 crc kubenswrapper[4765]: E1203 20:39:27.962518 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:39:59.962479932 +0000 UTC m=+97.893025143 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.050122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.050184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.050196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.050215 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.050227 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.154040 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.154447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.154461 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.154480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.154492 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.257709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.257765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.257782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.257804 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.257818 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.359041 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.359116 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:28 crc kubenswrapper[4765]: E1203 20:39:28.359152 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:28 crc kubenswrapper[4765]: E1203 20:39:28.359265 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.359353 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:28 crc kubenswrapper[4765]: E1203 20:39:28.359567 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.361025 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.361046 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.361054 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.361067 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.361077 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.463345 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.463411 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.463429 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.463454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.463471 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.566479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.566517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.566528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.566543 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.566553 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.668538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.668601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.668626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.668651 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.668669 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.770797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.770858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.770877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.770940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.770958 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.785746 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/0.log" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.785797 4765 generic.go:334] "Generic (PLEG): container finished" podID="2d91ef96-b0c9-43eb-8d49-e522199942c9" containerID="20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3" exitCode=1 Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.785848 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerDied","Data":"20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.786520 4765 scope.go:117] "RemoveContainer" containerID="20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.804795 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.820157 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.835043 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.848463 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.861043 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.872970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.873015 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.873032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.873054 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.873072 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.873911 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.887893 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.898537 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.911207 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.923277 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.940249 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.964930 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227
a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.976033 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.976103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.976120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.976602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.976659 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:28Z","lastTransitionTime":"2025-12-03T20:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.978905 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:28 crc kubenswrapper[4765]: I1203 20:39:28.990853 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:28Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.005498 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.017852 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.034772 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.043961 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.079170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.079196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.079203 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.079215 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.079223 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.180898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.181048 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.181148 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.181239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.181341 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.283382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.283426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.283435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.283448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.283457 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.358907 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:29 crc kubenswrapper[4765]: E1203 20:39:29.359078 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.385649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.385715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.385735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.385763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.385780 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.488730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.488778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.488795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.488816 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.488832 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.591249 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.591315 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.591329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.591344 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.591354 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.695329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.695589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.695714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.695811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.695889 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.791858 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/0.log" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.791969 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerStarted","Data":"0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.797816 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.797871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.797887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.797912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.797929 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.809125 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.826385 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.844277 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.857512 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.870091 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.889654 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.900364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.900415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.900435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.900460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.900478 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:29Z","lastTransitionTime":"2025-12-03T20:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.900703 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.922134 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.934195 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.945706 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.956479 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.966326 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:29 crc kubenswrapper[4765]: I1203 20:39:29.988491 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.000142 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:29Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.003096 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.003734 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.003754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.003776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.003793 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.013150 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:30Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.024550 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:30Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.035952 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:30Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.045752 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07
838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:30Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.106358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.106421 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.106459 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.106491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.106512 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.210003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.210090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.210113 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.210142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.210164 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.313204 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.313268 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.313290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.313350 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.313375 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.359532 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.359539 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.359642 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:30 crc kubenswrapper[4765]: E1203 20:39:30.359790 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:30 crc kubenswrapper[4765]: E1203 20:39:30.359856 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:30 crc kubenswrapper[4765]: E1203 20:39:30.359917 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.415765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.415800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.415808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.415821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.415830 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.518601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.518639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.518647 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.518661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.518670 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.621240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.621362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.621383 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.621431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.621450 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.723548 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.723616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.723636 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.723661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.723679 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.826726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.826804 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.826829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.826862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.826886 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.929912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.929957 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.929969 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.929986 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:30 crc kubenswrapper[4765]: I1203 20:39:30.929997 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:30Z","lastTransitionTime":"2025-12-03T20:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.032939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.033052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.033077 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.033108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.033129 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.136843 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.136930 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.136944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.136961 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.136973 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.242355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.242468 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.242495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.242539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.242574 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.345699 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.345730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.345756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.345770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.345779 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.359700 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:31 crc kubenswrapper[4765]: E1203 20:39:31.359805 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.449335 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.449715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.449874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.449982 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.450059 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.552422 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.552782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.553012 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.553212 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.553399 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.655722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.655767 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.655782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.655802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.655818 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.758722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.758768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.758784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.758806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.758824 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.861183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.861225 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.861235 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.861253 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.861263 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.963514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.963561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.963570 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.963589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:31 crc kubenswrapper[4765]: I1203 20:39:31.963601 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:31Z","lastTransitionTime":"2025-12-03T20:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.065566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.065596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.065604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.065617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.065626 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.167842 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.167893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.167908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.167928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.167942 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.270220 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.270288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.270359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.270395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.270419 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.359495 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.359542 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:32 crc kubenswrapper[4765]: E1203 20:39:32.359717 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.359733 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:32 crc kubenswrapper[4765]: E1203 20:39:32.359826 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:32 crc kubenswrapper[4765]: E1203 20:39:32.359889 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.373016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.373070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.373086 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.373110 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.373125 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.385433 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.398241 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.409811 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.420054 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.430175 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.453528 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.463865 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.475183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.475232 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.475244 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.475261 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.475273 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.476904 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.487834 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.498677 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.508373 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07
838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.522410 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc8
95822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.533272 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.543910 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.555962 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.565946 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.578157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.578192 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.578211 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.578228 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.578239 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.579666 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.593769 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a20638023
10e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:32Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.680661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.680707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.680722 4765 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.680742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.680757 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.782898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.782948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.782965 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.782987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.783004 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.885210 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.885273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.885290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.885366 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.885384 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.988648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.988692 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.988709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.988730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:32 crc kubenswrapper[4765]: I1203 20:39:32.988745 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:32Z","lastTransitionTime":"2025-12-03T20:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.091677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.091738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.091750 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.091766 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.091776 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.193983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.194034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.194044 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.194057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.194093 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.297056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.297127 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.297147 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.297174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.297191 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.359170 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:33 crc kubenswrapper[4765]: E1203 20:39:33.359394 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.360479 4765 scope.go:117] "RemoveContainer" containerID="00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.399373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.399407 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.399418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.399435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.399447 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.502386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.502436 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.502450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.502470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.502484 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.604439 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.604484 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.604496 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.604513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.604524 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.706252 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.706340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.706356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.706378 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.706394 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.809225 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.809266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.809276 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.809290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.809314 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.837987 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/2.log" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.840461 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.840915 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.868337 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.885940 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.898510 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.908336 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.911289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.911334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.911346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.911359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.911368 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:33Z","lastTransitionTime":"2025-12-03T20:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.918690 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.935974 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.947830 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc 
kubenswrapper[4765]: I1203 20:39:33.958429 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.977450 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:33 crc kubenswrapper[4765]: I1203 20:39:33.989047 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07
838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:33Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.001746 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.013920 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.013947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.013958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.013972 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.013983 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.014070 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.028156 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.041195 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.053182 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.070511 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.080691 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.092962 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.116447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.116500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.116516 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.116539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.116556 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.218237 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.218276 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.218288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.218325 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.218339 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.321234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.321276 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.321288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.321336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.321346 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.359123 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.359165 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.359227 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:34 crc kubenswrapper[4765]: E1203 20:39:34.359259 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:34 crc kubenswrapper[4765]: E1203 20:39:34.359477 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:34 crc kubenswrapper[4765]: E1203 20:39:34.359580 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.424142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.424213 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.424238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.424267 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.424288 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.527370 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.527451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.527462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.527475 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.527483 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.630047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.630090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.630103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.630121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.630132 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.733610 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.733663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.733680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.733703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.733721 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.836399 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.836454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.836470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.836491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.836508 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.851769 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/3.log" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.852893 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/2.log" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.856766 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" exitCode=1 Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.856830 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.856886 4765 scope.go:117] "RemoveContainer" containerID="00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.858409 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:39:34 crc kubenswrapper[4765]: E1203 20:39:34.858653 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.872971 4765 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.893529 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc8
95822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.909968 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.924227 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.938878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.939160 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.939259 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.939384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.939484 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:34Z","lastTransitionTime":"2025-12-03T20:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.940710 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.951593 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.965107 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.984811 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227
a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:34 crc kubenswrapper[4765]: I1203 20:39:34.995952 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:34Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.007072 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.019322 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.028878 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.041638 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.041665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.041676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.041692 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.041706 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.046847 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00b15358c36c8e301ef9ea302978a9c17b9c81715027151b85eb09f71f6b5911\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:09Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 20:39:09.296486 6415 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 20:39:09.296539 6415 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 20:39:09.296548 6415 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 
20:39:09.296569 6415 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 20:39:09.296581 6415 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 20:39:09.296599 6415 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 20:39:09.296627 6415 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 20:39:09.296634 6415 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 20:39:09.296638 6415 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 20:39:09.296654 6415 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 20:39:09.296669 6415 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 20:39:09.296676 6415 factory.go:656] Stopping watch factory\\\\nI1203 20:39:09.296680 6415 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 20:39:09.296693 6415 ovnkube.go:599] Stopped ovnkube\\\\nI1203 20:39:09.296720 6415 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:34Z\\\",\\\"message\\\":\\\"UIDName:}]\\\\nI1203 20:39:34.215970 6763 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] 
Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 20:39:34.216193 6763 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1203 20:39:34.216202 6763 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 20:39:34.216235 6763 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\
":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 
20:39:35.056362 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc 
kubenswrapper[4765]: I1203 20:39:35.070727 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.083433 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.096076 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.104602 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07
838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.144429 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.144495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.144515 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.144597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.144616 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.248055 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.248376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.248387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.248431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.248443 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.352183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.352245 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.352262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.352286 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.352334 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.353858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.353898 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.353918 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.353942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.353967 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.359101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.359337 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.372332 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.376835 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.376905 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.376931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.376962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.376984 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.401562 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.411256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.411350 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.411371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.411394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.411412 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.430713 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.435321 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.435344 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.435351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.435363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.435373 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.446514 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.452616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.452704 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.452725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.452751 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.452770 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.472526 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.472838 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.477589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.477662 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.477686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.477713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.477736 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.580357 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.580403 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.580417 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.580435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.580451 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.682806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.682846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.682858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.682876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.682889 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.786040 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.786400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.786561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.786758 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.786986 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.862534 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/3.log" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.866796 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:39:35 crc kubenswrapper[4765]: E1203 20:39:35.867395 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.883468 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.889680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.889710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.889721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 
20:39:35.889738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.889749 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.897131 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.911276 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finish
edAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\
"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.925552 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.937269 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.951415 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 
20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a3
72a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.966876 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.978241 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.992646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.992679 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.992689 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.992706 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.992716 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:35Z","lastTransitionTime":"2025-12-03T20:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:35 crc kubenswrapper[4765]: I1203 20:39:35.992721 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:35Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.005109 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.020734 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.049746 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227
a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.063467 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.081600 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.095358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.095397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.095409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.095424 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.095435 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.099239 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.109631 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.126794 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:34Z\\\",\\\"message\\\":\\\"UIDName:}]\\\\nI1203 20:39:34.215970 6763 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 20:39:34.216193 6763 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1203 20:39:34.216202 6763 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 20:39:34.216235 6763 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.137754 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:36Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.198053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.198403 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.198525 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.198619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.198701 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.301388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.301744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.301933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.302131 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.302277 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.359602 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.359673 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:36 crc kubenswrapper[4765]: E1203 20:39:36.359856 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.359928 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:36 crc kubenswrapper[4765]: E1203 20:39:36.360106 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:36 crc kubenswrapper[4765]: E1203 20:39:36.360255 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.404680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.404737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.404749 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.404764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.404776 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.508267 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.508374 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.508399 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.508430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.508452 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.612462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.612512 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.612524 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.612544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.612556 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.716170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.716227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.716244 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.716268 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.716285 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.819009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.819055 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.819069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.819087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.819102 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.921987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.922065 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.922090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.922122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:36 crc kubenswrapper[4765]: I1203 20:39:36.922144 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:36Z","lastTransitionTime":"2025-12-03T20:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.025456 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.025521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.025547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.025579 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.025604 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.128756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.128822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.128844 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.128871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.128894 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.232092 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.232148 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.232165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.232187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.232203 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.334675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.334758 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.334781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.334811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.334827 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.359099 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:37 crc kubenswrapper[4765]: E1203 20:39:37.359721 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.438085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.438131 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.438142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.438157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.438165 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.541523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.541786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.541874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.541955 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.542042 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.645582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.645939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.646187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.646418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.646621 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.750055 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.750128 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.750147 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.750177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.750198 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.853521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.853612 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.853639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.853672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.853694 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.956565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.956626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.956643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.956668 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:37 crc kubenswrapper[4765]: I1203 20:39:37.956687 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:37Z","lastTransitionTime":"2025-12-03T20:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.060016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.060052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.060062 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.060076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.060086 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.162518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.162578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.162598 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.162622 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.162640 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.265210 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.265256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.265264 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.265284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.265311 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.359584 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.359644 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:38 crc kubenswrapper[4765]: E1203 20:39:38.359763 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.359784 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:38 crc kubenswrapper[4765]: E1203 20:39:38.359899 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:38 crc kubenswrapper[4765]: E1203 20:39:38.360075 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.366914 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.366949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.366957 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.366975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.366984 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.471002 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.471067 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.471093 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.471118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.471137 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.574021 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.574136 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.574166 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.574198 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.574221 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.677252 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.677322 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.677333 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.677356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.677369 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.780734 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.780805 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.780827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.780858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.780878 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.884362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.884428 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.884445 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.884472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.884489 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.987218 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.987336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.987368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.987398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:38 crc kubenswrapper[4765]: I1203 20:39:38.987423 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:38Z","lastTransitionTime":"2025-12-03T20:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.090142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.090198 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.090212 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.090232 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.090246 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.192932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.192987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.193002 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.193021 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.193033 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.296447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.296528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.296551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.296580 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.296602 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.359803 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:39 crc kubenswrapper[4765]: E1203 20:39:39.359979 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.400021 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.400066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.400076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.400091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.400101 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.502885 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.502925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.502934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.502950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.502959 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.606506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.606573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.606592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.606616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.606635 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.709793 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.709859 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.709881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.709910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.709931 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.813466 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.813514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.813529 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.813551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.813567 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.916675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.916791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.916810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.916842 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:39 crc kubenswrapper[4765]: I1203 20:39:39.916858 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:39Z","lastTransitionTime":"2025-12-03T20:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.020445 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.020556 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.020577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.020603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.020621 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.122714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.122752 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.122766 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.122784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.122798 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.225972 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.226033 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.226054 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.226081 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.226105 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.329586 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.329901 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.330349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.330791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.331160 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.359205 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.359215 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:40 crc kubenswrapper[4765]: E1203 20:39:40.359437 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.359501 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:40 crc kubenswrapper[4765]: E1203 20:39:40.359598 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:40 crc kubenswrapper[4765]: E1203 20:39:40.359692 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.434068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.434144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.434163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.434187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.434203 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.537873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.537970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.537990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.538016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.538034 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.641667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.641729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.641746 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.641772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.641790 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.745003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.745325 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.745510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.745673 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.745821 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.849600 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.849668 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.849687 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.849711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.849729 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.953654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.954008 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.954178 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.954377 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:40 crc kubenswrapper[4765]: I1203 20:39:40.954538 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:40Z","lastTransitionTime":"2025-12-03T20:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.061485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.063043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.063075 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.063101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.063124 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.166676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.167356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.167527 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.167665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.167842 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.270243 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.270283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.270293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.270357 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.270381 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.359188 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:41 crc kubenswrapper[4765]: E1203 20:39:41.359382 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.372656 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.372812 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.372899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.373000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.373088 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.475012 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.475078 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.475090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.475107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.475119 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.578729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.578775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.578792 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.578814 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.578831 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.681342 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.681382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.681394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.681410 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.681420 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.784365 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.784393 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.784401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.784415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.784423 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.886601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.886663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.886683 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.886712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.886734 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.989799 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.989872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.989895 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.989925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:41 crc kubenswrapper[4765]: I1203 20:39:41.989949 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:41Z","lastTransitionTime":"2025-12-03T20:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.093865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.093927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.093949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.093987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.094010 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.197718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.197767 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.197786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.197809 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.197825 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.300676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.300745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.300769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.300799 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.300822 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.359532 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:42 crc kubenswrapper[4765]: E1203 20:39:42.359753 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.360105 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:42 crc kubenswrapper[4765]: E1203 20:39:42.360322 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.360128 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:42 crc kubenswrapper[4765]: E1203 20:39:42.360523 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.379516 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.395681 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f89a3e7b-e621-4e0c-95be-586218807c8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f80bd16911afb2c5bd97a20b144eeb2f5660a0a08afeddfca23ed56cd47e96be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f
12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f206027d8ed81e79073760357ab8a2063802310e24623b4a71d3fc0520778144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tffjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:54Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tblsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.404644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.404716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.404738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.404767 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.404791 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.415750 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 20:38:40.277453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 20:38:40.277690 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 20:38:40.279389 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3275496424/tls.crt::/tmp/serving-cert-3275496424/tls.key\\\\\\\"\\\\nI1203 20:38:40.649784 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 20:38:40.654030 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 20:38:40.654172 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 20:38:40.654269 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 20:38:40.654374 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 20:38:40.664023 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1203 20:38:40.664036 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1203 20:38:40.664061 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664067 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 20:38:40.664073 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 20:38:40.664078 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 20:38:40.664082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 20:38:40.664086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1203 20:38:40.666817 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.431561 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72bb2c59-8b4e-48b7-821e-e1b3c786b48e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b915b859638488cec344bc3bb01b90ede8a531169079652580da2dafd136dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c5fcc122b4a296feb69b1f92502568de832a0ffc98122311755924470e90e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee6a4f9a14e1f0387be019095fbcb17e99d263b486b686c7d92f1d6e4e78bcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.445933 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.468898 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.487776 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.507909 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.508166 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.508250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.508389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.508492 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.511970 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8556712d-ec86-4153-bcb0-cb9c261db743\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db51fa6f940b71c7a3b3b306413bd83b6da933f8d81029eaa5bb43b352130cfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44b48c128c0f8a2196cb765b7b5daa3c08bcebfeb8189c77e1b7d42139fafabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f96e9da90b4a1c313037577cebe8cba94a5e724fe8e4ee84d5b402f2f41b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae00288a87f76227a2339002f924122163102056c75fe866a7115b359d1ae32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9df0e2111ea18350b55ef5a6027c59e536293bb9b9611e67f116b6b7a949a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25a9323bcff40172c76c40445ae6b537eaef3003455dc6ee38f5cbba9d676704\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1800b1d9ac2534d769b42805c547e73680ca11a0625cd10e846b25f419dbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7321f2754cea9d7e64ad684e465e7772065ed009bb638559b188fef936ef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.525289 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db073be8ed934793e6e3de2f3df9fc261539538df65e6ea89d30e0203e8db365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.536286 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2670be8-9fe5-4210-ba7f-9538bbea79b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqkj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9bhn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc 
kubenswrapper[4765]: I1203 20:39:42.549149 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.567175 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a51776748e82873879ef0b70c146648fd6b8d2ce86e1e92a5d105c714d7847a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.579037 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b2gnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ecd2b1c-dbec-4009-b852-74060586afa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94803e7c16e871c22833f8ef34e3f99d34b5ae1f705c6b34c37d4de517d9c4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2657l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b2gnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.602910 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:34Z\\\",\\\"message\\\":\\\"UIDName:}]\\\\nI1203 20:39:34.215970 6763 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 20:39:34.216193 6763 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1203 20:39:34.216202 6763 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 20:39:34.216235 6763 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:39:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58d852772fc3627f8
7561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf4c4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9dzdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.612033 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.612131 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.612148 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.612207 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.612227 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.617171 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.630153 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c25bb7542d3f485c37f6c6710f5cca6a06bddb24140a0e7d29eb8bc4774f13d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749ca
b0830b14e2fdc74a5b9ec33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-swqqp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.643326 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p9xkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d91ef96-b0c9-43eb-8d49-e522199942c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T20:39:28Z\\\",\\\"message\\\":\\\"2025-12-03T20:38:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5\\\\n2025-12-03T20:38:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7d1f9097-cf86-4885-9114-8fb53077eea5 to /host/opt/cni/bin/\\\\n2025-12-03T20:38:43Z [verbose] multus-daemon started\\\\n2025-12-03T20:38:43Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T20:39:28Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lj426\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p9xkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.655526 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lbw96" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a392f98a-e249-4c20-a5b0-aeddb4cc0ad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5612bfa98d54c6d07
838a8b2e14fd0a005a484803adf626f0386ef0c9818fc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w7zlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lbw96\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:42Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.714513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.714585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.714603 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.714626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.714640 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.816961 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.817386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.817568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.817759 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.817901 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.921453 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.921845 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.922046 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.922203 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:42 crc kubenswrapper[4765]: I1203 20:39:42.922401 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:42Z","lastTransitionTime":"2025-12-03T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.024880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.025289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.025585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.025759 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.025905 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.128655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.128697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.128706 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.128720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.128729 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.232173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.232242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.232264 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.232344 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.232372 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.335919 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.335988 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.336022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.336047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.336065 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.359237 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:43 crc kubenswrapper[4765]: E1203 20:39:43.359497 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.439563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.439638 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.439655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.439686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.439704 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.543189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.543284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.543355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.543384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.543404 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.647281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.647374 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.647395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.647420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.647439 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.750980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.751043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.751060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.751085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.751104 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.854043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.854106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.854130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.854158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.854178 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.956893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.956950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.956963 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.956982 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:43 crc kubenswrapper[4765]: I1203 20:39:43.956996 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:43Z","lastTransitionTime":"2025-12-03T20:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.060859 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.060917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.060929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.060947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.060959 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.164144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.164221 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.164245 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.164272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.164290 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.242168 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242365 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 20:40:48.242339897 +0000 UTC m=+146.172885068 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.242462 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.242522 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.242557 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242690 4765 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242699 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242712 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242711 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242800 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242781 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.242758859 +0000 UTC m=+146.173304050 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242865 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.242843831 +0000 UTC m=+146.173389022 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.242899 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.242885494 +0000 UTC m=+146.173430685 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.267661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.267721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.267739 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.267800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.267824 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.343168 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.343472 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.343520 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.343551 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.343636 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.343607379 +0000 UTC m=+146.274152570 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.359262 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.359396 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.359415 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.359560 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.359881 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:44 crc kubenswrapper[4765]: E1203 20:39:44.360178 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.370650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.370701 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.370720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.370748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.370771 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.474187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.474253 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.474275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.474347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.474373 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.577294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.577387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.577405 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.577429 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.577450 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.680331 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.680397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.680453 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.680480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.680498 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.783539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.783593 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.783606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.783624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.783639 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.886027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.886095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.886217 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.886244 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.886263 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.989718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.990381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.990536 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.990725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:44 crc kubenswrapper[4765]: I1203 20:39:44.990901 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:44Z","lastTransitionTime":"2025-12-03T20:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.093495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.093545 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.093559 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.093578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.093592 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.196973 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.197709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.197747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.197770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.197786 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.300486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.300541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.300553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.300568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.300580 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.359650 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.359860 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.403336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.403562 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.403674 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.403767 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.403861 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.507353 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.507714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.507847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.508029 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.508165 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.531399 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.531666 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.531811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.531961 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.532088 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.554097 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.561066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.561359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.561520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.561719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.561880 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.583670 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.589254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.589488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.589727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.590604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.590644 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.613788 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.619142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.619205 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.619227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.619257 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.619282 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.640444 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.646085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.646360 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.646539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.646972 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.647169 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.665995 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9228a112-591f-4ddf-8f52-901f725e75be\\\",\\\"systemUUID\\\":\\\"139d9191-2874-499e-a609-baf6bc364e88\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:45Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:45 crc kubenswrapper[4765]: E1203 20:39:45.666208 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.668155 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.668196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.668213 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.668234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.668250 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.771086 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.771437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.771614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.771742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.771880 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.875291 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.875375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.875394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.875417 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.875432 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.978510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.978566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.978586 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.978610 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:45 crc kubenswrapper[4765]: I1203 20:39:45.978627 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:45Z","lastTransitionTime":"2025-12-03T20:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.082018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.082084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.082108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.082137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.082160 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.185336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.185397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.185415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.185439 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.185457 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.288239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.289149 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.289355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.289522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.289686 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.359525 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.359531 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:46 crc kubenswrapper[4765]: E1203 20:39:46.360268 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.359685 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:46 crc kubenswrapper[4765]: E1203 20:39:46.360534 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:46 crc kubenswrapper[4765]: E1203 20:39:46.360105 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.393360 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.393408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.393427 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.393449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.393467 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.496500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.496555 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.496573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.496785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.496810 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.600663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.600720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.600738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.600765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.600781 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.704551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.704617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.704639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.704670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.704693 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.807520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.807597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.807622 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.807650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.807671 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.910098 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.910408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.910501 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.910591 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:46 crc kubenswrapper[4765]: I1203 20:39:46.910680 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:46Z","lastTransitionTime":"2025-12-03T20:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.013224 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.013286 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.013332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.013359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.013385 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.116193 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.116480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.116546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.116610 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.116670 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.220236 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.220267 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.220274 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.220288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.220314 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.323325 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.323373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.323390 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.323415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.323435 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.359692 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:47 crc kubenswrapper[4765]: E1203 20:39:47.359906 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.361147 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:39:47 crc kubenswrapper[4765]: E1203 20:39:47.361440 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.426003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.426066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.426090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.426120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.426142 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.530407 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.530482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.530504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.530538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.530560 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.633458 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.633807 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.633994 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.634181 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.634600 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.737554 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.737646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.737673 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.737698 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.737716 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.840727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.840778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.840795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.840819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.840839 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.943585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.943670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.943695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.943725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:47 crc kubenswrapper[4765]: I1203 20:39:47.943747 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:47Z","lastTransitionTime":"2025-12-03T20:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.047216 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.047354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.047382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.047413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.047439 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.151137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.151208 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.151229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.151259 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.151281 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.253745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.253806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.253817 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.253835 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.253848 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.357191 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.357253 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.357277 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.357340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.357368 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.359811 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.359841 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.359917 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:48 crc kubenswrapper[4765]: E1203 20:39:48.360095 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:48 crc kubenswrapper[4765]: E1203 20:39:48.360429 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:48 crc kubenswrapper[4765]: E1203 20:39:48.360512 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.459984 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.460051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.460074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.460105 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.460126 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.563287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.563686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.563860 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.564074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.564241 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.666887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.666957 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.666972 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.666993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.667006 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.770314 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.770609 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.770710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.770821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.770909 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.873134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.873589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.873993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.874222 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.874445 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.977712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.977808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.977826 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.977852 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:48 crc kubenswrapper[4765]: I1203 20:39:48.977873 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:48Z","lastTransitionTime":"2025-12-03T20:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.081186 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.081262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.081282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.081334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.081352 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.184805 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.184881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.184904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.184933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.184956 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.287811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.287853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.287862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.287877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.287886 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.358789 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:49 crc kubenswrapper[4765]: E1203 20:39:49.359854 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.390693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.390752 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.390768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.390791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.390811 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.493457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.493534 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.493557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.493586 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.493603 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.596645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.596730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.596748 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.596802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.596824 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.699179 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.699245 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.699271 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.699341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.699367 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.803037 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.803106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.803128 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.803152 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.803173 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.906004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.906068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.906091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.906125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:49 crc kubenswrapper[4765]: I1203 20:39:49.906148 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:49Z","lastTransitionTime":"2025-12-03T20:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.009395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.009454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.009473 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.009495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.009512 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.113116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.113166 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.113183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.113206 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.113222 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.215834 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.215887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.215904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.215927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.215944 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.319486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.319542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.319558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.319581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.319598 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.359171 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:50 crc kubenswrapper[4765]: E1203 20:39:50.359362 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.359419 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:50 crc kubenswrapper[4765]: E1203 20:39:50.359572 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.359942 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:50 crc kubenswrapper[4765]: E1203 20:39:50.360180 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.422893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.422955 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.422972 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.422995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.423012 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.526241 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.526337 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.526356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.526380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.526397 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.629433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.629493 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.629512 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.629535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.629551 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.732358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.732494 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.732513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.732535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.732583 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.836155 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.836224 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.836248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.836277 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.836360 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.939386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.939466 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.939489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.939517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:50 crc kubenswrapper[4765]: I1203 20:39:50.939542 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:50Z","lastTransitionTime":"2025-12-03T20:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.042744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.042822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.042841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.042873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.042893 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.146645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.146718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.146737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.146768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.146786 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.250174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.250253 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.250274 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.250331 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.250351 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.353212 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.353264 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.353280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.353332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.353353 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.358998 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:51 crc kubenswrapper[4765]: E1203 20:39:51.359496 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.456291 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.456368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.456384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.456404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.456419 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.559283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.559570 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.559667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.559805 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.559900 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.662828 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.662893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.662911 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.662936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.662954 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.766988 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.767419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.767565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.767735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.767909 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.871848 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.871908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.871925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.871948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.871965 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.975476 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.975551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.975576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.975607 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:51 crc kubenswrapper[4765]: I1203 20:39:51.975628 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:51Z","lastTransitionTime":"2025-12-03T20:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.078726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.078785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.078802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.078827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.078846 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.182032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.182074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.182085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.182103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.182115 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.285100 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.285164 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.285191 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.285221 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.285246 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.359935 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.359935 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:52 crc kubenswrapper[4765]: E1203 20:39:52.360124 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.359954 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:52 crc kubenswrapper[4765]: E1203 20:39:52.360274 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:52 crc kubenswrapper[4765]: E1203 20:39:52.360573 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.376085 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a58a551c-6d82-4d61-a9bd-ac085d4453f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21fd9960dea602e55c2ff4c2108223a6eba0b6550ead9cb2b951a4f5a69d4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1312e48ba7c71d4778150cf3978d24ee181fbf028f3a08e2283825a36484768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4860ad3c763b82eaa7773c62f53694d372e5d84dcc803454474ffa481bd86191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05fe0addcb869c92b8db99af028b4d132c3a8d52b181cce2e37f1bfd1e9c05fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.387832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.387907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.387932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.387960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.387977 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.398892 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db5e688da190870c2b7b0646b214e4fa7401109a4b351e59a21a394b1bb1683b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.417965 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.435783 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-952wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fea2b65-0ca8-4eb1-ab58-0e990be1d0a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T20:38:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ae73eabbb5b65e275bd226c797402631a99fc98c08abead77d09176d4976023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T20:38:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84b3223bf169e7b04506fa9b7319d70e4cd64e5293e7a147cc9a6ef84933ca67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba04be36606d6ee6d355b1ec675d15254019ccab60b089d7f0b35f3a0c223694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e36930772dc84bf1dfa1e40672551744f6fd6eaa868f0ac6c8027c300cd4858d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10703
0ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://107030ed62ab7bf4d3c3a28ae47c1e247300d494319d3e98cf788e7548666dbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a64233f0a986b2049d398dbc087b79ba05ce9d21124fd16fc9343e50d4bb495\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccaaa0451248c525825366dd43d1789254421e3a11512bb89582555684f030a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T20:38:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T20:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d86ml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T20:38:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-952wr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T20:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.491646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.491692 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.491708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.491728 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.491742 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.491963 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tblsb" podStartSLOduration=71.491919279 podStartE2EDuration="1m11.491919279s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.469073839 +0000 UTC m=+90.399619030" watchObservedRunningTime="2025-12-03 20:39:52.491919279 +0000 UTC m=+90.422464450" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.492362 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.492351411 podStartE2EDuration="1m11.492351411s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.492228717 +0000 UTC m=+90.422773918" watchObservedRunningTime="2025-12-03 20:39:52.492351411 +0000 UTC m=+90.422896572" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.538290 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.538274748 podStartE2EDuration="1m10.538274748s" podCreationTimestamp="2025-12-03 20:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.537940179 +0000 UTC m=+90.468485330" watchObservedRunningTime="2025-12-03 20:39:52.538274748 +0000 UTC m=+90.468819899" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.538502 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.538497975 
podStartE2EDuration="1m10.538497975s" podCreationTimestamp="2025-12-03 20:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.51164376 +0000 UTC m=+90.442188911" watchObservedRunningTime="2025-12-03 20:39:52.538497975 +0000 UTC m=+90.469043126" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.585407 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-b2gnt" podStartSLOduration=71.585389808 podStartE2EDuration="1m11.585389808s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.585020948 +0000 UTC m=+90.515566099" watchObservedRunningTime="2025-12-03 20:39:52.585389808 +0000 UTC m=+90.515934959" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.593752 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.593777 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.593785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.593798 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.593806 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.668157 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-p9xkg" podStartSLOduration=71.668137664 podStartE2EDuration="1m11.668137664s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.667933228 +0000 UTC m=+90.598478379" watchObservedRunningTime="2025-12-03 20:39:52.668137664 +0000 UTC m=+90.598682815" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.677644 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lbw96" podStartSLOduration=71.677619303 podStartE2EDuration="1m11.677619303s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.677602403 +0000 UTC m=+90.608147564" watchObservedRunningTime="2025-12-03 20:39:52.677619303 +0000 UTC m=+90.608164464" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.696281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.696351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.696362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.696380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.696392 4765 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.701342 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podStartSLOduration=71.701321118 podStartE2EDuration="1m11.701321118s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:52.700724922 +0000 UTC m=+90.631270093" watchObservedRunningTime="2025-12-03 20:39:52.701321118 +0000 UTC m=+90.631866279" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.799886 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.799926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.799934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.799950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.799959 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.903229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.903272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.903285 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.903338 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:52 crc kubenswrapper[4765]: I1203 20:39:52.903351 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:52Z","lastTransitionTime":"2025-12-03T20:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.006836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.007280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.007467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.007607 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.007787 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.111832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.111924 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.111944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.111968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.111985 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.215663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.215736 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.215749 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.215774 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.215790 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.317960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.318024 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.318044 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.318071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.318092 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.359844 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:53 crc kubenswrapper[4765]: E1203 20:39:53.360150 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.373707 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.425778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.425837 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.425852 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.425876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.425889 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.528528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.528564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.528575 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.528592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.528604 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.630903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.630948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.630959 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.630981 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.630995 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.733190 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.733523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.733670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.733802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.733924 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.836671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.837038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.837219 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.837538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.837780 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.940540 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.940588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.940604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.940627 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:53 crc kubenswrapper[4765]: I1203 20:39:53.940644 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:53Z","lastTransitionTime":"2025-12-03T20:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.043943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.043991 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.044009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.044034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.044052 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.146600 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.146980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.147173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.147400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.147552 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.251658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.251718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.251729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.251751 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.251763 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.354426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.354542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.354590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.354623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.354643 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.360564 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:54 crc kubenswrapper[4765]: E1203 20:39:54.360703 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.361447 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:54 crc kubenswrapper[4765]: E1203 20:39:54.361513 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.361710 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:54 crc kubenswrapper[4765]: E1203 20:39:54.361765 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.457661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.457711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.457729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.457751 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.457766 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.560799 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.560869 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.560892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.560930 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.560972 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.664052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.664101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.664118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.664140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.664160 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.767486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.767558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.767574 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.767597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.767614 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.870456 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.870511 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.870528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.870552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.870568 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.973227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.973623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.973846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.974034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:54 crc kubenswrapper[4765]: I1203 20:39:54.974193 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:54Z","lastTransitionTime":"2025-12-03T20:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.077869 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.077950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.077971 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.078001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.078024 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.181004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.181111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.181135 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.181172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.181193 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.284341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.284420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.284445 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.284476 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.284497 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.359344 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:55 crc kubenswrapper[4765]: E1203 20:39:55.359572 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.389872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.389933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.389950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.389975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.390008 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.493822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.494217 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.494553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.494830 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.495364 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.599071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.599117 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.599133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.599155 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.599173 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.702927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.703212 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.703383 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.703558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.703700 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.806737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.807156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.807458 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.807679 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.807885 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.911000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.911430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.911625 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.911798 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:55 crc kubenswrapper[4765]: I1203 20:39:55.911935 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:55Z","lastTransitionTime":"2025-12-03T20:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.015477 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.015560 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.015583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.015618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.015644 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T20:39:56Z","lastTransitionTime":"2025-12-03T20:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.085193 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw"] Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.085827 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.088722 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.089100 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.089788 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.091733 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.127693 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.127668393 podStartE2EDuration="3.127668393s" podCreationTimestamp="2025-12-03 20:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:56.10680889 +0000 UTC m=+94.037354081" watchObservedRunningTime="2025-12-03 20:39:56.127668393 +0000 UTC m=+94.058213594" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.150900 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.150877983 podStartE2EDuration="40.150877983s" podCreationTimestamp="2025-12-03 20:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:56.129691511 +0000 UTC m=+94.060236662" watchObservedRunningTime="2025-12-03 20:39:56.150877983 +0000 UTC 
m=+94.081423144" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.171191 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f70f5192-415b-45f4-8327-df68109f375a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.171234 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f70f5192-415b-45f4-8327-df68109f375a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.171260 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f70f5192-415b-45f4-8327-df68109f375a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.171412 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f70f5192-415b-45f4-8327-df68109f375a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.171457 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/f70f5192-415b-45f4-8327-df68109f375a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.185996 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-952wr" podStartSLOduration=75.185971392 podStartE2EDuration="1m15.185971392s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:56.185270673 +0000 UTC m=+94.115815844" watchObservedRunningTime="2025-12-03 20:39:56.185971392 +0000 UTC m=+94.116516583" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.271867 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f70f5192-415b-45f4-8327-df68109f375a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.271901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f70f5192-415b-45f4-8327-df68109f375a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.271948 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f70f5192-415b-45f4-8327-df68109f375a-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.271964 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f70f5192-415b-45f4-8327-df68109f375a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.271981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f70f5192-415b-45f4-8327-df68109f375a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.272047 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f70f5192-415b-45f4-8327-df68109f375a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.272450 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f70f5192-415b-45f4-8327-df68109f375a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.273446 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f70f5192-415b-45f4-8327-df68109f375a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.278152 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f70f5192-415b-45f4-8327-df68109f375a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.288057 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f70f5192-415b-45f4-8327-df68109f375a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6z9dw\" (UID: \"f70f5192-415b-45f4-8327-df68109f375a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.359372 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.359415 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.359481 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:56 crc kubenswrapper[4765]: E1203 20:39:56.359568 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:56 crc kubenswrapper[4765]: E1203 20:39:56.359889 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:56 crc kubenswrapper[4765]: E1203 20:39:56.360143 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.410533 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" Dec 03 20:39:56 crc kubenswrapper[4765]: W1203 20:39:56.429206 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf70f5192_415b_45f4_8327_df68109f375a.slice/crio-1189cc047e60d2194778e833149990fd0e7bef797a5c28ea3db8b60fb937495a WatchSource:0}: Error finding container 1189cc047e60d2194778e833149990fd0e7bef797a5c28ea3db8b60fb937495a: Status 404 returned error can't find the container with id 1189cc047e60d2194778e833149990fd0e7bef797a5c28ea3db8b60fb937495a Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.947211 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" event={"ID":"f70f5192-415b-45f4-8327-df68109f375a","Type":"ContainerStarted","Data":"f071be5da4838fc514d4c8a26cd66f4102a299d8b98d58891212516c1802c32f"} Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.947828 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" event={"ID":"f70f5192-415b-45f4-8327-df68109f375a","Type":"ContainerStarted","Data":"1189cc047e60d2194778e833149990fd0e7bef797a5c28ea3db8b60fb937495a"} Dec 03 20:39:56 crc kubenswrapper[4765]: I1203 20:39:56.964671 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6z9dw" podStartSLOduration=75.964651064 podStartE2EDuration="1m15.964651064s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:39:56.962957865 +0000 UTC m=+94.893503046" watchObservedRunningTime="2025-12-03 20:39:56.964651064 +0000 UTC m=+94.895196245" Dec 03 20:39:57 crc kubenswrapper[4765]: I1203 20:39:57.359603 4765 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:57 crc kubenswrapper[4765]: E1203 20:39:57.359874 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:39:58 crc kubenswrapper[4765]: I1203 20:39:58.359812 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:39:58 crc kubenswrapper[4765]: I1203 20:39:58.359854 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:39:58 crc kubenswrapper[4765]: E1203 20:39:58.359942 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:39:58 crc kubenswrapper[4765]: I1203 20:39:58.360029 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:39:58 crc kubenswrapper[4765]: E1203 20:39:58.360201 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:39:58 crc kubenswrapper[4765]: E1203 20:39:58.360490 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:39:59 crc kubenswrapper[4765]: I1203 20:39:59.359056 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:39:59 crc kubenswrapper[4765]: E1203 20:39:59.359226 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:00 crc kubenswrapper[4765]: I1203 20:40:00.010479 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:00 crc kubenswrapper[4765]: E1203 20:40:00.010753 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:40:00 crc kubenswrapper[4765]: E1203 20:40:00.010851 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs podName:d2670be8-9fe5-4210-ba7f-9538bbea79b8 nodeName:}" failed. No retries permitted until 2025-12-03 20:41:04.010818777 +0000 UTC m=+161.941363968 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs") pod "network-metrics-daemon-9bhn8" (UID: "d2670be8-9fe5-4210-ba7f-9538bbea79b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 20:40:00 crc kubenswrapper[4765]: I1203 20:40:00.359207 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:00 crc kubenswrapper[4765]: I1203 20:40:00.359360 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:00 crc kubenswrapper[4765]: E1203 20:40:00.359421 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:00 crc kubenswrapper[4765]: E1203 20:40:00.359538 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:00 crc kubenswrapper[4765]: I1203 20:40:00.359744 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:00 crc kubenswrapper[4765]: E1203 20:40:00.359855 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:01 crc kubenswrapper[4765]: I1203 20:40:01.359644 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:01 crc kubenswrapper[4765]: E1203 20:40:01.360705 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:02 crc kubenswrapper[4765]: I1203 20:40:02.359121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:02 crc kubenswrapper[4765]: I1203 20:40:02.359692 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:02 crc kubenswrapper[4765]: I1203 20:40:02.361290 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:02 crc kubenswrapper[4765]: E1203 20:40:02.361423 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:02 crc kubenswrapper[4765]: E1203 20:40:02.361522 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:02 crc kubenswrapper[4765]: E1203 20:40:02.361598 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:02 crc kubenswrapper[4765]: I1203 20:40:02.361884 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:40:02 crc kubenswrapper[4765]: E1203 20:40:02.362151 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9dzdh_openshift-ovn-kubernetes(ad2eb102-7abd-48ad-8287-ab7d2d8a4166)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" Dec 03 20:40:03 crc kubenswrapper[4765]: I1203 20:40:03.359644 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:03 crc kubenswrapper[4765]: E1203 20:40:03.360230 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:04 crc kubenswrapper[4765]: I1203 20:40:04.359596 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:04 crc kubenswrapper[4765]: E1203 20:40:04.359702 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:04 crc kubenswrapper[4765]: I1203 20:40:04.359868 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:04 crc kubenswrapper[4765]: E1203 20:40:04.359907 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:04 crc kubenswrapper[4765]: I1203 20:40:04.359990 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:04 crc kubenswrapper[4765]: E1203 20:40:04.360093 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:05 crc kubenswrapper[4765]: I1203 20:40:05.359884 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:05 crc kubenswrapper[4765]: E1203 20:40:05.360095 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:06 crc kubenswrapper[4765]: I1203 20:40:06.359268 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:06 crc kubenswrapper[4765]: I1203 20:40:06.359392 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:06 crc kubenswrapper[4765]: I1203 20:40:06.359425 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:06 crc kubenswrapper[4765]: E1203 20:40:06.359544 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:06 crc kubenswrapper[4765]: E1203 20:40:06.359670 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:06 crc kubenswrapper[4765]: E1203 20:40:06.359792 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:07 crc kubenswrapper[4765]: I1203 20:40:07.359698 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:07 crc kubenswrapper[4765]: E1203 20:40:07.359932 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:08 crc kubenswrapper[4765]: I1203 20:40:08.358951 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:08 crc kubenswrapper[4765]: I1203 20:40:08.359041 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:08 crc kubenswrapper[4765]: E1203 20:40:08.359159 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:08 crc kubenswrapper[4765]: I1203 20:40:08.359208 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:08 crc kubenswrapper[4765]: E1203 20:40:08.359518 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:08 crc kubenswrapper[4765]: E1203 20:40:08.359670 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:09 crc kubenswrapper[4765]: I1203 20:40:09.359934 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:09 crc kubenswrapper[4765]: E1203 20:40:09.360158 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:10 crc kubenswrapper[4765]: I1203 20:40:10.359443 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:10 crc kubenswrapper[4765]: I1203 20:40:10.359534 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:10 crc kubenswrapper[4765]: I1203 20:40:10.359450 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:10 crc kubenswrapper[4765]: E1203 20:40:10.360216 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:10 crc kubenswrapper[4765]: E1203 20:40:10.360398 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:10 crc kubenswrapper[4765]: E1203 20:40:10.360084 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:11 crc kubenswrapper[4765]: I1203 20:40:11.358905 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:11 crc kubenswrapper[4765]: E1203 20:40:11.359132 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:12 crc kubenswrapper[4765]: I1203 20:40:12.358985 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:12 crc kubenswrapper[4765]: E1203 20:40:12.360149 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:12 crc kubenswrapper[4765]: I1203 20:40:12.360172 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:12 crc kubenswrapper[4765]: I1203 20:40:12.360258 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:12 crc kubenswrapper[4765]: E1203 20:40:12.360447 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:12 crc kubenswrapper[4765]: E1203 20:40:12.360684 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:13 crc kubenswrapper[4765]: I1203 20:40:13.359040 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:13 crc kubenswrapper[4765]: E1203 20:40:13.359248 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:14 crc kubenswrapper[4765]: I1203 20:40:14.359070 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:14 crc kubenswrapper[4765]: I1203 20:40:14.359121 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:14 crc kubenswrapper[4765]: E1203 20:40:14.359269 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:14 crc kubenswrapper[4765]: I1203 20:40:14.359395 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:14 crc kubenswrapper[4765]: E1203 20:40:14.359422 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:14 crc kubenswrapper[4765]: E1203 20:40:14.359609 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.416750 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:15 crc kubenswrapper[4765]: E1203 20:40:15.417139 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.422623 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/1.log" Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.423189 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/0.log" Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.423224 4765 generic.go:334] "Generic (PLEG): container finished" podID="2d91ef96-b0c9-43eb-8d49-e522199942c9" containerID="0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1" exitCode=1 Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.423253 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerDied","Data":"0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1"} Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.423281 4765 scope.go:117] "RemoveContainer" containerID="20d974c15ad39764f7ead67faf38a5807bb3e849a0460d15adedf0da35d1bea3" Dec 03 20:40:15 crc kubenswrapper[4765]: I1203 20:40:15.423764 4765 scope.go:117] "RemoveContainer" containerID="0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1" Dec 03 20:40:15 crc kubenswrapper[4765]: E1203 20:40:15.423894 4765 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-p9xkg_openshift-multus(2d91ef96-b0c9-43eb-8d49-e522199942c9)\"" pod="openshift-multus/multus-p9xkg" podUID="2d91ef96-b0c9-43eb-8d49-e522199942c9" Dec 03 20:40:16 crc kubenswrapper[4765]: I1203 20:40:16.359488 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:16 crc kubenswrapper[4765]: E1203 20:40:16.360010 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:16 crc kubenswrapper[4765]: I1203 20:40:16.359658 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:16 crc kubenswrapper[4765]: E1203 20:40:16.360134 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:16 crc kubenswrapper[4765]: I1203 20:40:16.359614 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:16 crc kubenswrapper[4765]: E1203 20:40:16.360348 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:16 crc kubenswrapper[4765]: I1203 20:40:16.428092 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/1.log" Dec 03 20:40:17 crc kubenswrapper[4765]: I1203 20:40:17.359340 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:17 crc kubenswrapper[4765]: E1203 20:40:17.359791 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:17 crc kubenswrapper[4765]: I1203 20:40:17.360074 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.289561 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9bhn8"] Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.289659 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:18 crc kubenswrapper[4765]: E1203 20:40:18.289743 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.359183 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.359225 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:18 crc kubenswrapper[4765]: E1203 20:40:18.359440 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:18 crc kubenswrapper[4765]: E1203 20:40:18.359753 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.359881 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:18 crc kubenswrapper[4765]: E1203 20:40:18.359956 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.436071 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/3.log" Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.438918 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerStarted","Data":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} Dec 03 20:40:18 crc kubenswrapper[4765]: I1203 20:40:18.439382 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:40:20 crc kubenswrapper[4765]: I1203 20:40:20.359417 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:20 crc kubenswrapper[4765]: E1203 20:40:20.359843 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:20 crc kubenswrapper[4765]: I1203 20:40:20.359418 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:20 crc kubenswrapper[4765]: I1203 20:40:20.359477 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:20 crc kubenswrapper[4765]: E1203 20:40:20.359989 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:20 crc kubenswrapper[4765]: E1203 20:40:20.360060 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:20 crc kubenswrapper[4765]: I1203 20:40:20.359448 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:20 crc kubenswrapper[4765]: E1203 20:40:20.360140 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:22 crc kubenswrapper[4765]: E1203 20:40:22.325197 4765 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 20:40:22 crc kubenswrapper[4765]: I1203 20:40:22.359108 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:22 crc kubenswrapper[4765]: I1203 20:40:22.359184 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:22 crc kubenswrapper[4765]: I1203 20:40:22.359243 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:22 crc kubenswrapper[4765]: E1203 20:40:22.360912 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:22 crc kubenswrapper[4765]: I1203 20:40:22.360996 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:22 crc kubenswrapper[4765]: E1203 20:40:22.361151 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:22 crc kubenswrapper[4765]: E1203 20:40:22.361245 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:22 crc kubenswrapper[4765]: E1203 20:40:22.361526 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:22 crc kubenswrapper[4765]: E1203 20:40:22.449273 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 20:40:24 crc kubenswrapper[4765]: I1203 20:40:24.359494 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:24 crc kubenswrapper[4765]: I1203 20:40:24.359601 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:24 crc kubenswrapper[4765]: I1203 20:40:24.359675 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:24 crc kubenswrapper[4765]: E1203 20:40:24.359669 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:24 crc kubenswrapper[4765]: I1203 20:40:24.359824 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:24 crc kubenswrapper[4765]: E1203 20:40:24.359852 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:24 crc kubenswrapper[4765]: E1203 20:40:24.359900 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:24 crc kubenswrapper[4765]: E1203 20:40:24.359968 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:26 crc kubenswrapper[4765]: I1203 20:40:26.359518 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:26 crc kubenswrapper[4765]: I1203 20:40:26.359559 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:26 crc kubenswrapper[4765]: I1203 20:40:26.359643 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:26 crc kubenswrapper[4765]: E1203 20:40:26.359676 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:26 crc kubenswrapper[4765]: E1203 20:40:26.359827 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:26 crc kubenswrapper[4765]: E1203 20:40:26.359933 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:26 crc kubenswrapper[4765]: I1203 20:40:26.360338 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:26 crc kubenswrapper[4765]: E1203 20:40:26.360445 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:26 crc kubenswrapper[4765]: I1203 20:40:26.432220 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:40:26 crc kubenswrapper[4765]: I1203 20:40:26.471113 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podStartSLOduration=105.471058903 podStartE2EDuration="1m45.471058903s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:18.472457435 +0000 UTC m=+116.403002606" watchObservedRunningTime="2025-12-03 20:40:26.471058903 +0000 UTC m=+124.401604094" Dec 03 20:40:27 crc kubenswrapper[4765]: I1203 20:40:27.360103 4765 scope.go:117] "RemoveContainer" containerID="0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1" Dec 03 20:40:27 crc kubenswrapper[4765]: E1203 20:40:27.450552 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 20:40:28 crc kubenswrapper[4765]: I1203 20:40:28.359934 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:28 crc kubenswrapper[4765]: I1203 20:40:28.359977 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:28 crc kubenswrapper[4765]: I1203 20:40:28.359998 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:28 crc kubenswrapper[4765]: E1203 20:40:28.362588 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:28 crc kubenswrapper[4765]: I1203 20:40:28.362917 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:28 crc kubenswrapper[4765]: E1203 20:40:28.363076 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:28 crc kubenswrapper[4765]: E1203 20:40:28.363347 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:28 crc kubenswrapper[4765]: E1203 20:40:28.363614 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:28 crc kubenswrapper[4765]: I1203 20:40:28.473539 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/1.log" Dec 03 20:40:28 crc kubenswrapper[4765]: I1203 20:40:28.473605 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerStarted","Data":"9f4e50c7a9c77e4ad4801f69011139f8b6f6169501e5a6e2b884c6fceecfa5da"} Dec 03 20:40:30 crc kubenswrapper[4765]: I1203 20:40:30.359706 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:30 crc kubenswrapper[4765]: I1203 20:40:30.359762 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:30 crc kubenswrapper[4765]: I1203 20:40:30.359725 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:30 crc kubenswrapper[4765]: I1203 20:40:30.359705 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:30 crc kubenswrapper[4765]: E1203 20:40:30.359839 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:30 crc kubenswrapper[4765]: E1203 20:40:30.359975 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:30 crc kubenswrapper[4765]: E1203 20:40:30.360069 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:30 crc kubenswrapper[4765]: E1203 20:40:30.360181 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:32 crc kubenswrapper[4765]: I1203 20:40:32.359800 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:32 crc kubenswrapper[4765]: E1203 20:40:32.360705 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 20:40:32 crc kubenswrapper[4765]: I1203 20:40:32.360883 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:32 crc kubenswrapper[4765]: I1203 20:40:32.360915 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:32 crc kubenswrapper[4765]: E1203 20:40:32.360962 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 20:40:32 crc kubenswrapper[4765]: I1203 20:40:32.361052 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:32 crc kubenswrapper[4765]: E1203 20:40:32.361093 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 20:40:32 crc kubenswrapper[4765]: E1203 20:40:32.361201 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9bhn8" podUID="d2670be8-9fe5-4210-ba7f-9538bbea79b8" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.359514 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.359632 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.359770 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.359698 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.362099 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.362128 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.363507 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.363627 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.364123 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 20:40:34 crc kubenswrapper[4765]: I1203 20:40:34.364775 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.041488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.087509 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gj4nd"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.088390 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.089464 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkvvm"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.090115 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.092417 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.092804 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.099882 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.100317 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.122164 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.122358 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.122504 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.122374 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.122847 4765 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.122780 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.123163 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.123404 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.123569 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.124614 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.124819 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125027 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125181 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125317 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125360 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125621 4765 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125761 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.125767 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.126067 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.126081 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.126243 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.126261 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.126834 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.126994 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.127184 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.127675 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130259 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130422 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-etcd-client\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130459 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29c890bc-a753-4a38-b8d5-33098898333b-serving-cert\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130503 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-config\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130536 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-node-pullsecrets\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130570 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-audit\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130603 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsf69\" (UniqueName: \"kubernetes.io/projected/29c890bc-a753-4a38-b8d5-33098898333b-kube-api-access-qsf69\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130658 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-config\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130698 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.130744 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-client-ca\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.133799 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-46h8d"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.134065 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.134204 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.135512 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.136948 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.136990 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.137587 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.137960 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.138097 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vj5h7"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.138384 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.138598 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.140393 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.141138 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.141155 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.142241 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.142328 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.143023 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.144133 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.144758 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-stgcm"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.145109 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-stgcm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.145518 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.148422 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.148916 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.156361 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.156984 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mj2gq"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.157232 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.157278 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-l6r9q"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.157384 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.157836 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l6r9q"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.158374 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2x88"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.158867 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t2x88"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.159411 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.159667 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.160023 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.160054 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.160514 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.160656 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.160681 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.160943 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.164079 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.179014 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7jkhw"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.179875 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7jkhw"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.183605 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.183691 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.183918 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.183985 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.184030 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.184124 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.184161 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.184387 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.184520 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.191196 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.192489 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2hvlc"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.194569 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.194671 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.194851 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195014 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195056 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.205007 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195069 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.205161 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195198 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195320 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195398 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195404 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195477 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.195870 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196552 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196608 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196752 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196820 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.206001 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196864 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196897 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196929 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196996 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.206454 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.196996 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197135 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197198 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197252 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197324 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.206930 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.207329 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.207416 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c5s6t"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.207575 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.207715 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197377 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197425 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197489 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197627 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197853 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.197952 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198038 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198067 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198097 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198139 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198175 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198181 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198212 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198225 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198284 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198354 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198362 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198405 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198440 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.209428 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198440 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198485 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198518 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198564 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198577 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198600 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.198657 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.200533 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.200603 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.200654 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.200715 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.204114 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.211834 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.212284 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.212980 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.213072 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.213216 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.213374 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.214073 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vslcz"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.214651 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.215109 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.215208 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.215819 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.216229 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.216858 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.219870 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8wxvz"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.220556 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.221396 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm"]
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.221801 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.230769 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.231461 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.234248 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.235890 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.237867 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnssx\" (UniqueName: \"kubernetes.io/projected/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-kube-api-access-mnssx\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.237932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-image-import-ca\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238044 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8f9f0ff-9067-4555-873a-28815df1d4f6-audit-dir\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238072 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238103 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-etcd-client\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238123 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc5nm\" (UniqueName: \"kubernetes.io/projected/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-kube-api-access-tc5nm\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-encryption-config\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238275 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjwz\" (UniqueName: \"kubernetes.io/projected/8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f-kube-api-access-dkjwz\") pod \"multus-admission-controller-857f4d67dd-vslcz\" (UID: \"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238317 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-node-pullsecrets\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238338 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-audit\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238358 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsf69\" (UniqueName: \"kubernetes.io/projected/29c890bc-a753-4a38-b8d5-33098898333b-kube-api-access-qsf69\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238379 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238485 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-serving-cert\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238512 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-encryption-config\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238531 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-etcd-client\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238562 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-client-ca\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238584 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vslcz\" (UID: \"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238673 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238694 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-serving-cert\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238715 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29c890bc-a753-4a38-b8d5-33098898333b-serving-cert\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238735 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238757 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-config\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.238779 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-proxy-tls\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239018 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239041 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239075 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-config\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239097 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-audit-dir\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239266 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239307 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-audit-policies\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239330 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtt6\" (UniqueName: \"kubernetes.io/projected/a8f9f0ff-9067-4555-873a-28815df1d4f6-kube-api-access-qmtt6\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239349 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.239370 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgr44\" (UniqueName: \"kubernetes.io/projected/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-kube-api-access-mgr44\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.240346 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.240983 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-node-pullsecrets\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.241687 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-audit\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm"
Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.242765 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-config\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") "
pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.243626 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-client-ca\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.244567 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-config\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.246901 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.247652 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29c890bc-a753-4a38-b8d5-33098898333b-serving-cert\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.253829 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.255535 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.255943 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-etcd-client\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.261238 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-984s7"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.262783 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.264555 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.266274 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.266972 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q9f52"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.268292 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gj4nd"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.268407 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.270448 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.270828 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.270858 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.271725 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.272396 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkvvm"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.273225 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.274164 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-46h8d"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.276094 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.277001 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.277992 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.279152 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.280129 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.281091 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.282082 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.282983 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.284308 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4fsnt"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.285598 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.286748 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7jkhw"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.288695 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.289609 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.290117 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.292186 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.294667 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mj2gq"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.296344 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8wxvz"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.299568 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2x88"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.301184 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vj5h7"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.302854 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-stgcm"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.303665 4765 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.308449 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.308487 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.310029 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.310175 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.310950 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2hvlc"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.312364 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vslcz"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.313740 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-984s7"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.316001 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.316966 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q9f52"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.318244 4765 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c5s6t"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.319243 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.320260 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.322503 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4jqxz"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.323885 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.324983 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lbnxq"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.330401 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4fsnt"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.330599 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.330666 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.331630 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lbnxq"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.333900 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4jqxz"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.335625 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kcjkk"] Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.336712 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340113 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-audit-dir\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340158 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-audit-policies\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340189 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmtt6\" (UniqueName: 
\"kubernetes.io/projected/a8f9f0ff-9067-4555-873a-28815df1d4f6-kube-api-access-qmtt6\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340213 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340238 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgr44\" (UniqueName: \"kubernetes.io/projected/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-kube-api-access-mgr44\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340271 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-image-import-ca\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340290 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8f9f0ff-9067-4555-873a-28815df1d4f6-audit-dir\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc 
kubenswrapper[4765]: I1203 20:40:37.340339 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnssx\" (UniqueName: \"kubernetes.io/projected/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-kube-api-access-mnssx\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340399 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc5nm\" (UniqueName: \"kubernetes.io/projected/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-kube-api-access-tc5nm\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340460 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340482 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-encryption-config\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340517 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjwz\" (UniqueName: \"kubernetes.io/projected/8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f-kube-api-access-dkjwz\") pod 
\"multus-admission-controller-857f4d67dd-vslcz\" (UID: \"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340618 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-serving-cert\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340650 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-encryption-config\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340672 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-etcd-client\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340697 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vslcz\" (UID: \"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340747 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-serving-cert\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340772 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340797 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-proxy-tls\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340825 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.340845 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.341159 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8f9f0ff-9067-4555-873a-28815df1d4f6-audit-dir\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.341737 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-image-import-ca\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.342287 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-audit-dir\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.342656 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.342856 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.344072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.344287 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8f9f0ff-9067-4555-873a-28815df1d4f6-audit-policies\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.344319 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.344997 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.345676 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-encryption-config\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.345733 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-serving-cert\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.345707 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.352747 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-encryption-config\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.353195 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.353440 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/a8f9f0ff-9067-4555-873a-28815df1d4f6-etcd-client\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.354136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-serving-cert\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.370203 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.390192 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.409766 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.450933 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.472793 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.490424 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.525778 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 
20:40:37.530765 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.550473 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.570661 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.598405 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.611123 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.630267 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.650684 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.671246 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.691831 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.712354 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.730387 4765 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.751420 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.770359 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.790979 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.811892 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.831423 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.850953 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.870657 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.891698 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.911062 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 20:40:37 crc 
kubenswrapper[4765]: I1203 20:40:37.931193 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.951086 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.971337 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 20:40:37 crc kubenswrapper[4765]: I1203 20:40:37.991438 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.011098 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.031494 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.051586 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.070646 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.090912 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.111293 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.130856 4765 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.151229 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.171188 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.191278 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.195930 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vslcz\" (UID: \"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.211210 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.229036 4765 request.go:700] Waited for 1.013005831s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.230789 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.250981 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" 
Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.279485 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.290491 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.312093 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.337958 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 20:40:38 crc kubenswrapper[4765]: E1203 20:40:38.342647 4765 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 03 20:40:38 crc kubenswrapper[4765]: E1203 20:40:38.342755 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-proxy-tls podName:27f01ae7-ddd0-4a0a-9d26-e86a4e50f411 nodeName:}" failed. No retries permitted until 2025-12-03 20:40:38.842726059 +0000 UTC m=+136.773271250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-proxy-tls") pod "machine-config-controller-84d6567774-zqbfv" (UID: "27f01ae7-ddd0-4a0a-9d26-e86a4e50f411") : failed to sync secret cache: timed out waiting for the condition Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.351602 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.371002 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.391569 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.412073 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.430953 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.451466 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.470445 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.506818 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.512355 4765 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.531085 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.551699 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.570948 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.591802 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.612717 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.670776 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.671166 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsf69\" (UniqueName: \"kubernetes.io/projected/29c890bc-a753-4a38-b8d5-33098898333b-kube-api-access-qsf69\") pod \"controller-manager-879f6c89f-gj4nd\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.691184 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.717945 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 
20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.730978 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.750373 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.771941 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.791269 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.811398 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.832026 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.850831 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.857609 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-proxy-tls\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.860947 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-proxy-tls\") pod 
\"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.871699 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.891092 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.911426 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.928655 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.930855 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.951264 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.971917 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 20:40:38 crc kubenswrapper[4765]: I1203 20:40:38.991658 4765 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.011021 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.031196 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 20:40:39 crc 
kubenswrapper[4765]: I1203 20:40:39.051585 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.071585 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.090852 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.111618 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.130226 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.133915 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gj4nd"] Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.150969 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.185143 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc5nm\" (UniqueName: \"kubernetes.io/projected/d931ee2a-fefd-45cb-9cb6-4db4f3b20083-kube-api-access-tc5nm\") pod \"apiserver-76f77b778f-qkvvm\" (UID: \"d931ee2a-fefd-45cb-9cb6-4db4f3b20083\") " pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.205906 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjwz\" (UniqueName: \"kubernetes.io/projected/8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f-kube-api-access-dkjwz\") pod 
\"multus-admission-controller-857f4d67dd-vslcz\" (UID: \"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.231842 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnssx\" (UniqueName: \"kubernetes.io/projected/27f01ae7-ddd0-4a0a-9d26-e86a4e50f411-kube-api-access-mnssx\") pod \"machine-config-controller-84d6567774-zqbfv\" (UID: \"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.243443 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgr44\" (UniqueName: \"kubernetes.io/projected/4e5174d2-b2e5-4c8f-ad57-e08e06808fcc-kube-api-access-mgr44\") pod \"openshift-controller-manager-operator-756b6f6bc6-mm9nl\" (UID: \"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.249210 4765 request.go:700] Waited for 1.905213482s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.263281 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmtt6\" (UniqueName: \"kubernetes.io/projected/a8f9f0ff-9067-4555-873a-28815df1d4f6-kube-api-access-qmtt6\") pod \"apiserver-7bbb656c7d-6g5wl\" (UID: \"a8f9f0ff-9067-4555-873a-28815df1d4f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.280739 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.286101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:39 crc kubenswrapper[4765]: I1203 20:40:39.456491 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkvvm"] Dec 03 20:40:39 crc kubenswrapper[4765]: W1203 20:40:39.467787 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd931ee2a_fefd_45cb_9cb6_4db4f3b20083.slice/crio-7a7f147821b5a87d4a17c3fd82488014441f3cf15406b5dd5a43ad2d83a7d314 WatchSource:0}: Error finding container 7a7f147821b5a87d4a17c3fd82488014441f3cf15406b5dd5a43ad2d83a7d314: Status 404 returned error can't find the container with id 7a7f147821b5a87d4a17c3fd82488014441f3cf15406b5dd5a43ad2d83a7d314 Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.253718 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.254769 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.257177 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.257309 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-registry-tls\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.257365 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.257721 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl"] Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.258355 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:40.758330657 +0000 UTC m=+138.688875808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.267965 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" event={"ID":"29c890bc-a753-4a38-b8d5-33098898333b","Type":"ContainerStarted","Data":"41eac0c5db7818a67877cf4531dd4258ab4fc2017fef796bfcd16a0b5d62c913"} Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.269287 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" event={"ID":"d931ee2a-fefd-45cb-9cb6-4db4f3b20083","Type":"ContainerStarted","Data":"7a7f147821b5a87d4a17c3fd82488014441f3cf15406b5dd5a43ad2d83a7d314"} Dec 03 20:40:40 crc kubenswrapper[4765]: W1203 20:40:40.270425 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8f9f0ff_9067_4555_873a_28815df1d4f6.slice/crio-e1a7d97c7ef3aad7a81c32aae680a4b080db2a446bc7e25271c6a18ed733e6d3 WatchSource:0}: Error finding container e1a7d97c7ef3aad7a81c32aae680a4b080db2a446bc7e25271c6a18ed733e6d3: Status 404 returned error can't find the container with id e1a7d97c7ef3aad7a81c32aae680a4b080db2a446bc7e25271c6a18ed733e6d3 Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.358754 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.358950 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.358984 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-registry-tls\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359009 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab43899d-622d-43b5-aef2-1bdc77e4b04d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359035 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19034462-4553-4fe9-ba44-6c2b4c6e17d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359081 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-bound-sa-token\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359106 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ggq\" (UniqueName: \"kubernetes.io/projected/594c65ad-a006-4b48-946f-e6e9fe9e3f13-kube-api-access-z2ggq\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359126 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-registry-certificates\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359146 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d99dfc5-7325-42cc-b69d-97fb7afeb049-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359165 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-service-ca\") pod \"console-f9d7485db-stgcm\" 
(UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359185 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58118968-5018-452e-ad64-2ae028378570-serving-cert\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359207 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-config\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359227 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jqrs\" (UniqueName: \"kubernetes.io/projected/54a4db3f-a64d-4b52-8d7f-cece2602c57c-kube-api-access-9jqrs\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359247 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gzq6\" (UniqueName: \"kubernetes.io/projected/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-kube-api-access-6gzq6\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359271 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2jc\" (UniqueName: \"kubernetes.io/projected/1368046e-eb8f-4969-8698-aa8e0c72204a-kube-api-access-tt2jc\") pod \"cluster-samples-operator-665b6dd947-2sm67\" (UID: \"1368046e-eb8f-4969-8698-aa8e0c72204a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359345 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-service-ca-bundle\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359370 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad8c5639-241d-47bb-8228-d08219c7c882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359390 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7689l\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-kube-api-access-7689l\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359412 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f4e540f5-00a4-4e01-aa34-fb3a4c249677-metrics-tls\") pod 
\"dns-operator-744455d44c-t2x88\" (UID: \"f4e540f5-00a4-4e01-aa34-fb3a4c249677\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359434 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-574ml\" (UniqueName: \"kubernetes.io/projected/7fe27b06-dcc9-41a3-9768-b84fc02e378f-kube-api-access-574ml\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359469 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhtz\" (UniqueName: \"kubernetes.io/projected/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-kube-api-access-jhhtz\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359491 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359511 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7fe27b06-dcc9-41a3-9768-b84fc02e378f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.359546 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/594c65ad-a006-4b48-946f-e6e9fe9e3f13-proxy-tls\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359585 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359629 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1368046e-eb8f-4969-8698-aa8e0c72204a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2sm67\" (UID: \"1368046e-eb8f-4969-8698-aa8e0c72204a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359667 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/54a4db3f-a64d-4b52-8d7f-cece2602c57c-serving-cert\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359688 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-policies\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359716 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-dir\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359740 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78mn\" (UniqueName: \"kubernetes.io/projected/70f44f4f-8e44-460a-9696-5af11fc75a95-kube-api-access-f78mn\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359771 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84594671-44e4-4db9-b215-dd0d596e7ac5-auth-proxy-config\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" 
Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359791 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359814 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgcr\" (UniqueName: \"kubernetes.io/projected/349a2f61-6556-4ca3-b200-30613a913e11-kube-api-access-qtgcr\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359836 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19034462-4553-4fe9-ba44-6c2b4c6e17d3-config\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359856 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/484861d9-076d-43a5-854c-aae7cf403e43-srv-cert\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359875 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58118968-5018-452e-ad64-2ae028378570-config\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359897 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1033ee94-376d-4190-8e79-ce0d34031aed-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359919 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b3d1ca02-bcba-4523-8713-12443cebf75d-tmpfs\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359941 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vsp\" (UniqueName: \"kubernetes.io/projected/ab43899d-622d-43b5-aef2-1bdc77e4b04d-kube-api-access-t4vsp\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359960 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-default-certificate\") pod \"router-default-5444994796-l6r9q\" 
(UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.359983 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360005 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82868331-992e-4d1c-ba2c-8eabe62071f0-serving-cert\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360028 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d87ca8a-2073-4f34-8dd7-a02a348018e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360046 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/594c65ad-a006-4b48-946f-e6e9fe9e3f13-images\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360068 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-serving-cert\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360090 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab43899d-622d-43b5-aef2-1bdc77e4b04d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d99dfc5-7325-42cc-b69d-97fb7afeb049-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360133 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360153 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64642dc1-627c-4efd-8301-2cb6f6166e43-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360174 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-metrics-certs\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360195 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360241 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chnwd\" (UniqueName: \"kubernetes.io/projected/f4e540f5-00a4-4e01-aa34-fb3a4c249677-kube-api-access-chnwd\") pod \"dns-operator-744455d44c-t2x88\" (UID: \"f4e540f5-00a4-4e01-aa34-fb3a4c249677\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360292 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19034462-4553-4fe9-ba44-6c2b4c6e17d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.360372 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360392 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4db3f-a64d-4b52-8d7f-cece2602c57c-config\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360422 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-stats-auth\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360442 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbnfh\" (UniqueName: \"kubernetes.io/projected/c25824b2-7d4e-4fdd-ac80-d2975d802570-kube-api-access-pbnfh\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360463 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-config\") pod 
\"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360602 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360626 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczh5\" (UniqueName: \"kubernetes.io/projected/de590c28-833f-4c0b-9184-62a37519a9e0-kube-api-access-kczh5\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360660 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/484861d9-076d-43a5-854c-aae7cf403e43-profile-collector-cert\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360681 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlxbk\" (UniqueName: \"kubernetes.io/projected/484861d9-076d-43a5-854c-aae7cf403e43-kube-api-access-mlxbk\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.360701 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qc6m\" (UniqueName: \"kubernetes.io/projected/58118968-5018-452e-ad64-2ae028378570-kube-api-access-2qc6m\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360734 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7f24\" (UniqueName: \"kubernetes.io/projected/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-kube-api-access-v7f24\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360768 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360790 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58118968-5018-452e-ad64-2ae028378570-trusted-ca\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360824 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360846 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd1c3235-df64-48e7-9c08-e7ee70c8fe49-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qv6v8\" (UID: \"bd1c3235-df64-48e7-9c08-e7ee70c8fe49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360869 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe27b06-dcc9-41a3-9768-b84fc02e378f-serving-cert\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360904 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1033ee94-376d-4190-8e79-ce0d34031aed-images\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360923 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-ca\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360944 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-client-ca\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.360981 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhsqz\" (UniqueName: \"kubernetes.io/projected/ccac6268-00a4-448f-a04d-2d0aad175726-kube-api-access-vhsqz\") pod \"downloads-7954f5f757-7jkhw\" (UID: \"ccac6268-00a4-448f-a04d-2d0aad175726\") " pod="openshift-console/downloads-7954f5f757-7jkhw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361004 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361027 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg58g\" (UniqueName: \"kubernetes.io/projected/82868331-992e-4d1c-ba2c-8eabe62071f0-kube-api-access-xg58g\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361051 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3d1ca02-bcba-4523-8713-12443cebf75d-webhook-cert\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361073 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361134 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-config\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: 
\"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361164 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf498\" (UniqueName: \"kubernetes.io/projected/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-kube-api-access-jf498\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361184 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361207 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25824b2-7d4e-4fdd-ac80-d2975d802570-serving-cert\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361240 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-trusted-ca-bundle\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361259 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-oauth-config\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361280 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-oauth-serving-cert\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361322 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67vt\" (UniqueName: \"kubernetes.io/projected/1033ee94-376d-4190-8e79-ce0d34031aed-kube-api-access-b67vt\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361356 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzb6\" (UniqueName: \"kubernetes.io/projected/65a4be30-e857-49e0-8bef-1d24b338b5b2-kube-api-access-xkzb6\") pod \"package-server-manager-789f6589d5-c2bj5\" (UID: \"65a4be30-e857-49e0-8bef-1d24b338b5b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361376 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84594671-44e4-4db9-b215-dd0d596e7ac5-config\") pod \"machine-approver-56656f9798-5c4vp\" (UID: 
\"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361413 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361436 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a4be30-e857-49e0-8bef-1d24b338b5b2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c2bj5\" (UID: \"65a4be30-e857-49e0-8bef-1d24b338b5b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361456 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-service-ca\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361496 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrdf\" (UniqueName: \"kubernetes.io/projected/bd1c3235-df64-48e7-9c08-e7ee70c8fe49-kube-api-access-fkrdf\") pod \"control-plane-machine-set-operator-78cbb6b69f-qv6v8\" (UID: \"bd1c3235-df64-48e7-9c08-e7ee70c8fe49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.361516 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64642dc1-627c-4efd-8301-2cb6f6166e43-srv-cert\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361534 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-serving-cert\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361571 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8cdk\" (UniqueName: \"kubernetes.io/projected/b3d1ca02-bcba-4523-8713-12443cebf75d-kube-api-access-h8cdk\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361593 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1033ee94-376d-4190-8e79-ce0d34031aed-config\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361614 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3d1ca02-bcba-4523-8713-12443cebf75d-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361643 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d87ca8a-2073-4f34-8dd7-a02a348018e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361664 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349a2f61-6556-4ca3-b200-30613a913e11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361699 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-569fv\" (UniqueName: \"kubernetes.io/projected/6fce1062-dbe7-42f2-a519-d0bc96d9c16d-kube-api-access-569fv\") pod \"migrator-59844c95c7-kqrd4\" (UID: \"6fce1062-dbe7-42f2-a519-d0bc96d9c16d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361721 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-trusted-ca\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.361743 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d87ca8a-2073-4f34-8dd7-a02a348018e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361763 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d99dfc5-7325-42cc-b69d-97fb7afeb049-config\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361783 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9vl\" (UniqueName: \"kubernetes.io/projected/84594671-44e4-4db9-b215-dd0d596e7ac5-kube-api-access-tb9vl\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361803 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361837 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad8c5639-241d-47bb-8228-d08219c7c882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361857 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/84594671-44e4-4db9-b215-dd0d596e7ac5-machine-approver-tls\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361883 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6w5q\" (UniqueName: \"kubernetes.io/projected/64642dc1-627c-4efd-8301-2cb6f6166e43-kube-api-access-b6w5q\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361910 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-client\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349a2f61-6556-4ca3-b200-30613a913e11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361957 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/594c65ad-a006-4b48-946f-e6e9fe9e3f13-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.361982 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-console-config\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.371607 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:40.871582141 +0000 UTC m=+138.802127322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.372189 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-registry-tls\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468125 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569fv\" (UniqueName: \"kubernetes.io/projected/6fce1062-dbe7-42f2-a519-d0bc96d9c16d-kube-api-access-569fv\") pod \"migrator-59844c95c7-kqrd4\" (UID: \"6fce1062-dbe7-42f2-a519-d0bc96d9c16d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468158 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468189 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-trusted-ca\") pod 
\"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468211 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d87ca8a-2073-4f34-8dd7-a02a348018e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468231 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d99dfc5-7325-42cc-b69d-97fb7afeb049-config\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468251 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9vl\" (UniqueName: \"kubernetes.io/projected/84594671-44e4-4db9-b215-dd0d596e7ac5-kube-api-access-tb9vl\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468272 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad8c5639-241d-47bb-8228-d08219c7c882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468292 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/84594671-44e4-4db9-b215-dd0d596e7ac5-machine-approver-tls\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468326 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349a2f61-6556-4ca3-b200-30613a913e11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468343 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6w5q\" (UniqueName: \"kubernetes.io/projected/64642dc1-627c-4efd-8301-2cb6f6166e43-kube-api-access-b6w5q\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468358 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-client\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468384 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/594c65ad-a006-4b48-946f-e6e9fe9e3f13-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") 
" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468399 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-console-config\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468419 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468442 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468457 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk25j\" (UniqueName: \"kubernetes.io/projected/468fea60-b89b-4a56-89c5-1e1f2586301c-kube-api-access-fk25j\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468476 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/19034462-4553-4fe9-ba44-6c2b4c6e17d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468490 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab43899d-622d-43b5-aef2-1bdc77e4b04d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c5998eb-7e4c-415b-9e2b-6198992d2027-cert\") pod \"ingress-canary-lbnxq\" (UID: \"5c5998eb-7e4c-415b-9e2b-6198992d2027\") " pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468522 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-bound-sa-token\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468542 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ggq\" (UniqueName: \"kubernetes.io/projected/594c65ad-a006-4b48-946f-e6e9fe9e3f13-kube-api-access-z2ggq\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.468564 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-registry-certificates\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468662 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d99dfc5-7325-42cc-b69d-97fb7afeb049-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468687 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-service-ca\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468702 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-config\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468718 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58118968-5018-452e-ad64-2ae028378570-serving-cert\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " 
pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468734 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jqrs\" (UniqueName: \"kubernetes.io/projected/54a4db3f-a64d-4b52-8d7f-cece2602c57c-kube-api-access-9jqrs\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468757 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gzq6\" (UniqueName: \"kubernetes.io/projected/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-kube-api-access-6gzq6\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468780 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx7zq\" (UniqueName: \"kubernetes.io/projected/5c5998eb-7e4c-415b-9e2b-6198992d2027-kube-api-access-cx7zq\") pod \"ingress-canary-lbnxq\" (UID: \"5c5998eb-7e4c-415b-9e2b-6198992d2027\") " pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468801 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v689\" (UniqueName: \"kubernetes.io/projected/c6d367ea-4210-4883-96bc-54987e5f6f7a-kube-api-access-9v689\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468819 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tt2jc\" (UniqueName: \"kubernetes.io/projected/1368046e-eb8f-4969-8698-aa8e0c72204a-kube-api-access-tt2jc\") pod \"cluster-samples-operator-665b6dd947-2sm67\" (UID: \"1368046e-eb8f-4969-8698-aa8e0c72204a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468844 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-service-ca-bundle\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468871 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95a05559-52f4-4623-9a2d-7326ebf3d7bb-metrics-tls\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468916 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad8c5639-241d-47bb-8228-d08219c7c882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468941 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7689l\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-kube-api-access-7689l\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 
20:40:40.468964 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f4e540f5-00a4-4e01-aa34-fb3a4c249677-metrics-tls\") pod \"dns-operator-744455d44c-t2x88\" (UID: \"f4e540f5-00a4-4e01-aa34-fb3a4c249677\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.468988 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-574ml\" (UniqueName: \"kubernetes.io/projected/7fe27b06-dcc9-41a3-9768-b84fc02e378f-kube-api-access-574ml\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469012 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhtz\" (UniqueName: \"kubernetes.io/projected/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-kube-api-access-jhhtz\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469034 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469054 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7fe27b06-dcc9-41a3-9768-b84fc02e378f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: 
\"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/594c65ad-a006-4b48-946f-e6e9fe9e3f13-proxy-tls\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469099 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469136 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1368046e-eb8f-4969-8698-aa8e0c72204a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2sm67\" (UID: \"1368046e-eb8f-4969-8698-aa8e0c72204a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469158 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469177 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-dir\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469198 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-csi-data-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469220 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4db3f-a64d-4b52-8d7f-cece2602c57c-serving-cert\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469241 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-policies\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469264 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f78mn\" (UniqueName: \"kubernetes.io/projected/70f44f4f-8e44-460a-9696-5af11fc75a95-kube-api-access-f78mn\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469285 
4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-socket-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469324 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84594671-44e4-4db9-b215-dd0d596e7ac5-auth-proxy-config\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469346 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469372 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgcr\" (UniqueName: \"kubernetes.io/projected/349a2f61-6556-4ca3-b200-30613a913e11-kube-api-access-qtgcr\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469393 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19034462-4553-4fe9-ba44-6c2b4c6e17d3-config\") pod 
\"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469415 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/484861d9-076d-43a5-854c-aae7cf403e43-srv-cert\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469435 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58118968-5018-452e-ad64-2ae028378570-config\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469457 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469478 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1033ee94-376d-4190-8e79-ce0d34031aed-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469503 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b3d1ca02-bcba-4523-8713-12443cebf75d-tmpfs\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vsp\" (UniqueName: \"kubernetes.io/projected/ab43899d-622d-43b5-aef2-1bdc77e4b04d-kube-api-access-t4vsp\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469551 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-default-certificate\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469573 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82868331-992e-4d1c-ba2c-8eabe62071f0-serving-cert\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469600 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-plugins-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d87ca8a-2073-4f34-8dd7-a02a348018e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469645 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/594c65ad-a006-4b48-946f-e6e9fe9e3f13-images\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469666 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-serving-cert\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469687 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab43899d-622d-43b5-aef2-1bdc77e4b04d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469722 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d99dfc5-7325-42cc-b69d-97fb7afeb049-kube-api-access\") 
pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469744 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6d367ea-4210-4883-96bc-54987e5f6f7a-secret-volume\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469766 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-mountpoint-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469785 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/30679830-f405-44ca-9575-ff37afd13189-signing-key\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469804 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-registration-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469828 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469852 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64642dc1-627c-4efd-8301-2cb6f6166e43-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469873 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-metrics-certs\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469894 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469919 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chnwd\" (UniqueName: \"kubernetes.io/projected/f4e540f5-00a4-4e01-aa34-fb3a4c249677-kube-api-access-chnwd\") pod \"dns-operator-744455d44c-t2x88\" (UID: \"f4e540f5-00a4-4e01-aa34-fb3a4c249677\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469940 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a05559-52f4-4623-9a2d-7326ebf3d7bb-config-volume\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19034462-4553-4fe9-ba44-6c2b4c6e17d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.469997 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470018 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4db3f-a64d-4b52-8d7f-cece2602c57c-config\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470074 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-stats-auth\") pod 
\"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470101 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbnfh\" (UniqueName: \"kubernetes.io/projected/c25824b2-7d4e-4fdd-ac80-d2975d802570-kube-api-access-pbnfh\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470124 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-config\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470162 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d367ea-4210-4883-96bc-54987e5f6f7a-config-volume\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470213 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470236 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kczh5\" (UniqueName: \"kubernetes.io/projected/de590c28-833f-4c0b-9184-62a37519a9e0-kube-api-access-kczh5\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470262 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470286 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7f24\" (UniqueName: \"kubernetes.io/projected/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-kube-api-access-v7f24\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470348 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/484861d9-076d-43a5-854c-aae7cf403e43-profile-collector-cert\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470374 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlxbk\" (UniqueName: \"kubernetes.io/projected/484861d9-076d-43a5-854c-aae7cf403e43-kube-api-access-mlxbk\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470647 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qc6m\" (UniqueName: \"kubernetes.io/projected/58118968-5018-452e-ad64-2ae028378570-kube-api-access-2qc6m\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.470700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad8c5639-241d-47bb-8228-d08219c7c882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471220 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19034462-4553-4fe9-ba44-6c2b4c6e17d3-config\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471410 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471736 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471765 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58118968-5018-452e-ad64-2ae028378570-trusted-ca\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471781 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471799 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd1c3235-df64-48e7-9c08-e7ee70c8fe49-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qv6v8\" (UID: \"bd1c3235-df64-48e7-9c08-e7ee70c8fe49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471819 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qlm\" (UniqueName: \"kubernetes.io/projected/d04f847d-2261-48b3-9314-7b3b1cb8af38-kube-api-access-99qlm\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471838 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe27b06-dcc9-41a3-9768-b84fc02e378f-serving-cert\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471875 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1033ee94-376d-4190-8e79-ce0d34031aed-images\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471892 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-ca\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471911 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-client-ca\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471938 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhsqz\" (UniqueName: \"kubernetes.io/projected/ccac6268-00a4-448f-a04d-2d0aad175726-kube-api-access-vhsqz\") pod 
\"downloads-7954f5f757-7jkhw\" (UID: \"ccac6268-00a4-448f-a04d-2d0aad175726\") " pod="openshift-console/downloads-7954f5f757-7jkhw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471962 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.471986 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg58g\" (UniqueName: \"kubernetes.io/projected/82868331-992e-4d1c-ba2c-8eabe62071f0-kube-api-access-xg58g\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472007 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/468fea60-b89b-4a56-89c5-1e1f2586301c-node-bootstrap-token\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472025 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3d1ca02-bcba-4523-8713-12443cebf75d-webhook-cert\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472043 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gqvg\" (UniqueName: \"kubernetes.io/projected/95a05559-52f4-4623-9a2d-7326ebf3d7bb-kube-api-access-8gqvg\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472060 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472076 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472098 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472119 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf498\" (UniqueName: \"kubernetes.io/projected/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-kube-api-access-jf498\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 
20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472137 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-config\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472154 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472180 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-trusted-ca-bundle\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472197 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25824b2-7d4e-4fdd-ac80-d2975d802570-serving-cert\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472215 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-oauth-config\") pod \"console-f9d7485db-stgcm\" (UID: 
\"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472244 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-oauth-serving-cert\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472261 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67vt\" (UniqueName: \"kubernetes.io/projected/1033ee94-376d-4190-8e79-ce0d34031aed-kube-api-access-b67vt\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472281 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzb6\" (UniqueName: \"kubernetes.io/projected/65a4be30-e857-49e0-8bef-1d24b338b5b2-kube-api-access-xkzb6\") pod \"package-server-manager-789f6589d5-c2bj5\" (UID: \"65a4be30-e857-49e0-8bef-1d24b338b5b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472314 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84594671-44e4-4db9-b215-dd0d596e7ac5-config\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472331 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/30679830-f405-44ca-9575-ff37afd13189-signing-cabundle\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472350 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9xtw\" (UniqueName: \"kubernetes.io/projected/286ddd24-e2a1-407b-95ba-5af10398ebb0-kube-api-access-q9xtw\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472365 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472385 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472403 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a4be30-e857-49e0-8bef-1d24b338b5b2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c2bj5\" (UID: \"65a4be30-e857-49e0-8bef-1d24b338b5b2\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472420 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-service-ca\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472444 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrdf\" (UniqueName: \"kubernetes.io/projected/bd1c3235-df64-48e7-9c08-e7ee70c8fe49-kube-api-access-fkrdf\") pod \"control-plane-machine-set-operator-78cbb6b69f-qv6v8\" (UID: \"bd1c3235-df64-48e7-9c08-e7ee70c8fe49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472466 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbqtx\" (UniqueName: \"kubernetes.io/projected/30679830-f405-44ca-9575-ff37afd13189-kube-api-access-fbqtx\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472488 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64642dc1-627c-4efd-8301-2cb6f6166e43-srv-cert\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472512 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-serving-cert\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472528 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/468fea60-b89b-4a56-89c5-1e1f2586301c-certs\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472544 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1033ee94-376d-4190-8e79-ce0d34031aed-config\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8cdk\" (UniqueName: \"kubernetes.io/projected/b3d1ca02-bcba-4523-8713-12443cebf75d-kube-api-access-h8cdk\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472579 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3d1ca02-bcba-4523-8713-12443cebf75d-apiservice-cert\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472604 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d87ca8a-2073-4f34-8dd7-a02a348018e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472603 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d99dfc5-7325-42cc-b69d-97fb7afeb049-config\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.472625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349a2f61-6556-4ca3-b200-30613a913e11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.474605 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-console-config\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.474746 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a4db3f-a64d-4b52-8d7f-cece2602c57c-config\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 
crc kubenswrapper[4765]: I1203 20:40:40.475321 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/594c65ad-a006-4b48-946f-e6e9fe9e3f13-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.475808 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84594671-44e4-4db9-b215-dd0d596e7ac5-config\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.476969 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19034462-4553-4fe9-ba44-6c2b4c6e17d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.477884 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.479657 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3d1ca02-bcba-4523-8713-12443cebf75d-webhook-cert\") pod \"packageserver-d55dfcdfc-645jw\" (UID: 
\"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.480231 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab43899d-622d-43b5-aef2-1bdc77e4b04d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.481537 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-registry-certificates\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.482119 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.482388 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/65a4be30-e857-49e0-8bef-1d24b338b5b2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c2bj5\" (UID: \"65a4be30-e857-49e0-8bef-1d24b338b5b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.482892 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-service-ca\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.483154 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.483842 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d99dfc5-7325-42cc-b69d-97fb7afeb049-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.484376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-service-ca\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.484715 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/84594671-44e4-4db9-b215-dd0d596e7ac5-machine-approver-tls\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.484807 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-config\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.487349 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-config\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.487688 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-client\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.487793 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.488187 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.488353 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58118968-5018-452e-ad64-2ae028378570-serving-cert\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.488912 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-service-ca-bundle\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.490184 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1033ee94-376d-4190-8e79-ce0d34031aed-images\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.490393 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-trusted-ca-bundle\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.492068 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58118968-5018-452e-ad64-2ae028378570-trusted-ca\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.495046 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-etcd-ca\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.495449 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-metrics-certs\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.495742 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.495878 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.496458 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-trusted-ca\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.496483 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58118968-5018-452e-ad64-2ae028378570-config\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.496579 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349a2f61-6556-4ca3-b200-30613a913e11-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.496613 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1033ee94-376d-4190-8e79-ce0d34031aed-config\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.497468 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349a2f61-6556-4ca3-b200-30613a913e11-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.497504 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-oauth-serving-cert\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.499154 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:40.999143011 +0000 UTC m=+138.929688162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.500055 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7fe27b06-dcc9-41a3-9768-b84fc02e378f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.502937 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.503331 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad8c5639-241d-47bb-8228-d08219c7c882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.503969 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25824b2-7d4e-4fdd-ac80-d2975d802570-serving-cert\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.504248 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d87ca8a-2073-4f34-8dd7-a02a348018e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.504653 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-serving-cert\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.504885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.505259 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd1c3235-df64-48e7-9c08-e7ee70c8fe49-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qv6v8\" (UID: \"bd1c3235-df64-48e7-9c08-e7ee70c8fe49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.506964 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82868331-992e-4d1c-ba2c-8eabe62071f0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.508472 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.508499 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-dir\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.509646 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-policies\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.510290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84594671-44e4-4db9-b215-dd0d596e7ac5-auth-proxy-config\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.510417 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-config\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.511197 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/484861d9-076d-43a5-854c-aae7cf403e43-srv-cert\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.511706 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.511730 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d87ca8a-2073-4f34-8dd7-a02a348018e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.512653 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab43899d-622d-43b5-aef2-1bdc77e4b04d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.512893 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b3d1ca02-bcba-4523-8713-12443cebf75d-tmpfs\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.513885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-client-ca\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.516854 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.517954 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/594c65ad-a006-4b48-946f-e6e9fe9e3f13-proxy-tls\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.517712 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1368046e-eb8f-4969-8698-aa8e0c72204a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2sm67\" (UID: \"1368046e-eb8f-4969-8698-aa8e0c72204a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.517670 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chnwd\" (UniqueName: \"kubernetes.io/projected/f4e540f5-00a4-4e01-aa34-fb3a4c249677-kube-api-access-chnwd\") pod \"dns-operator-744455d44c-t2x88\" (UID: \"f4e540f5-00a4-4e01-aa34-fb3a4c249677\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.518054 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3d1ca02-bcba-4523-8713-12443cebf75d-apiservice-cert\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.518766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-stats-auth\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " 
pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.518945 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f4e540f5-00a4-4e01-aa34-fb3a4c249677-metrics-tls\") pod \"dns-operator-744455d44c-t2x88\" (UID: \"f4e540f5-00a4-4e01-aa34-fb3a4c249677\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.521190 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg58g\" (UniqueName: \"kubernetes.io/projected/82868331-992e-4d1c-ba2c-8eabe62071f0-kube-api-access-xg58g\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.521421 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d87ca8a-2073-4f34-8dd7-a02a348018e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2bp7f\" (UID: \"2d87ca8a-2073-4f34-8dd7-a02a348018e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.521672 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.521734 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54a4db3f-a64d-4b52-8d7f-cece2602c57c-serving-cert\") pod 
\"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.522261 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6w5q\" (UniqueName: \"kubernetes.io/projected/64642dc1-627c-4efd-8301-2cb6f6166e43-kube-api-access-b6w5q\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.523773 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64642dc1-627c-4efd-8301-2cb6f6166e43-srv-cert\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.524104 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569fv\" (UniqueName: \"kubernetes.io/projected/6fce1062-dbe7-42f2-a519-d0bc96d9c16d-kube-api-access-569fv\") pod \"migrator-59844c95c7-kqrd4\" (UID: \"6fce1062-dbe7-42f2-a519-d0bc96d9c16d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.526451 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-574ml\" (UniqueName: \"kubernetes.io/projected/7fe27b06-dcc9-41a3-9768-b84fc02e378f-kube-api-access-574ml\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.528579 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tt2jc\" (UniqueName: \"kubernetes.io/projected/1368046e-eb8f-4969-8698-aa8e0c72204a-kube-api-access-tt2jc\") pod \"cluster-samples-operator-665b6dd947-2sm67\" (UID: \"1368046e-eb8f-4969-8698-aa8e0c72204a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.530273 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.529717 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82868331-992e-4d1c-ba2c-8eabe62071f0-serving-cert\") pod \"authentication-operator-69f744f599-8wxvz\" (UID: \"82868331-992e-4d1c-ba2c-8eabe62071f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.530775 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrdf\" (UniqueName: \"kubernetes.io/projected/bd1c3235-df64-48e7-9c08-e7ee70c8fe49-kube-api-access-fkrdf\") pod \"control-plane-machine-set-operator-78cbb6b69f-qv6v8\" (UID: \"bd1c3235-df64-48e7-9c08-e7ee70c8fe49\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.530966 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7689l\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-kube-api-access-7689l\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.532751 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-serving-cert\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.532965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7f24\" (UniqueName: \"kubernetes.io/projected/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-kube-api-access-v7f24\") pod \"router-default-5444994796-l6r9q\" (UID: \"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.533133 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgcr\" (UniqueName: \"kubernetes.io/projected/349a2f61-6556-4ca3-b200-30613a913e11-kube-api-access-qtgcr\") pod \"kube-storage-version-migrator-operator-b67b599dd-vwm56\" (UID: \"349a2f61-6556-4ca3-b200-30613a913e11\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.534359 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8cdk\" (UniqueName: \"kubernetes.io/projected/b3d1ca02-bcba-4523-8713-12443cebf75d-kube-api-access-h8cdk\") pod \"packageserver-d55dfcdfc-645jw\" (UID: \"b3d1ca02-bcba-4523-8713-12443cebf75d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.534704 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/594c65ad-a006-4b48-946f-e6e9fe9e3f13-images\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.534741 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlxbk\" (UniqueName: \"kubernetes.io/projected/484861d9-076d-43a5-854c-aae7cf403e43-kube-api-access-mlxbk\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.536082 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.536832 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gzq6\" (UniqueName: \"kubernetes.io/projected/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-kube-api-access-6gzq6\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.537252 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczh5\" (UniqueName: \"kubernetes.io/projected/de590c28-833f-4c0b-9184-62a37519a9e0-kube-api-access-kczh5\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.539431 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/64642dc1-627c-4efd-8301-2cb6f6166e43-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9fdfw\" (UID: \"64642dc1-627c-4efd-8301-2cb6f6166e43\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.539661 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.540647 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f78mn\" (UniqueName: \"kubernetes.io/projected/70f44f4f-8e44-460a-9696-5af11fc75a95-kube-api-access-f78mn\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.540678 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67vt\" (UniqueName: \"kubernetes.io/projected/1033ee94-376d-4190-8e79-ce0d34031aed-kube-api-access-b67vt\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.541435 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzb6\" (UniqueName: \"kubernetes.io/projected/65a4be30-e857-49e0-8bef-1d24b338b5b2-kube-api-access-xkzb6\") pod \"package-server-manager-789f6589d5-c2bj5\" (UID: \"65a4be30-e857-49e0-8bef-1d24b338b5b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.541864 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9vl\" (UniqueName: 
\"kubernetes.io/projected/84594671-44e4-4db9-b215-dd0d596e7ac5-kube-api-access-tb9vl\") pod \"machine-approver-56656f9798-5c4vp\" (UID: \"84594671-44e4-4db9-b215-dd0d596e7ac5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.541929 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vsp\" (UniqueName: \"kubernetes.io/projected/ab43899d-622d-43b5-aef2-1bdc77e4b04d-kube-api-access-t4vsp\") pod \"openshift-apiserver-operator-796bbdcf4f-nhpxz\" (UID: \"ab43899d-622d-43b5-aef2-1bdc77e4b04d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.541935 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbnfh\" (UniqueName: \"kubernetes.io/projected/c25824b2-7d4e-4fdd-ac80-d2975d802570-kube-api-access-pbnfh\") pod \"route-controller-manager-6576b87f9c-4wldp\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.542017 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhsqz\" (UniqueName: \"kubernetes.io/projected/ccac6268-00a4-448f-a04d-2d0aad175726-kube-api-access-vhsqz\") pod \"downloads-7954f5f757-7jkhw\" (UID: \"ccac6268-00a4-448f-a04d-2d0aad175726\") " pod="openshift-console/downloads-7954f5f757-7jkhw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.543357 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19034462-4553-4fe9-ba44-6c2b4c6e17d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2bvfx\" (UID: \"19034462-4553-4fe9-ba44-6c2b4c6e17d3\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.543908 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jqrs\" (UniqueName: \"kubernetes.io/projected/54a4db3f-a64d-4b52-8d7f-cece2602c57c-kube-api-access-9jqrs\") pod \"service-ca-operator-777779d784-l2qhf\" (UID: \"54a4db3f-a64d-4b52-8d7f-cece2602c57c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.544803 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fe27b06-dcc9-41a3-9768-b84fc02e378f-serving-cert\") pod \"openshift-config-operator-7777fb866f-dp9jm\" (UID: \"7fe27b06-dcc9-41a3-9768-b84fc02e378f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.544846 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-bound-sa-token\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.545333 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf498\" (UniqueName: \"kubernetes.io/projected/9c60f706-afe3-43f0-a8b6-f1d8003a0d82-kube-api-access-jf498\") pod \"etcd-operator-b45778765-c5s6t\" (UID: \"9c60f706-afe3-43f0-a8b6-f1d8003a0d82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.545379 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bd21beb3-07f3-450b-9c58-8edc8ef9b9ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vd9qv\" (UID: \"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.545812 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/484861d9-076d-43a5-854c-aae7cf403e43-profile-collector-cert\") pod \"catalog-operator-68c6474976-n54j6\" (UID: \"484861d9-076d-43a5-854c-aae7cf403e43\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.545943 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhtz\" (UniqueName: \"kubernetes.io/projected/72bcf519-7707-4f5f-b7f2-9a77cdfe292e-kube-api-access-jhhtz\") pod \"ingress-operator-5b745b69d9-qg9sx\" (UID: \"72bcf519-7707-4f5f-b7f2-9a77cdfe292e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.546172 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ggq\" (UniqueName: \"kubernetes.io/projected/594c65ad-a006-4b48-946f-e6e9fe9e3f13-kube-api-access-z2ggq\") pod \"machine-config-operator-74547568cd-7bgqb\" (UID: \"594c65ad-a006-4b48-946f-e6e9fe9e3f13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.546453 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 
03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.546834 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2hvlc\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.547033 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1033ee94-376d-4190-8e79-ce0d34031aed-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vj5h7\" (UID: \"1033ee94-376d-4190-8e79-ce0d34031aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.547402 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-oauth-config\") pod \"console-f9d7485db-stgcm\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.548105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qc6m\" (UniqueName: \"kubernetes.io/projected/58118968-5018-452e-ad64-2ae028378570-kube-api-access-2qc6m\") pod \"console-operator-58897d9998-46h8d\" (UID: \"58118968-5018-452e-ad64-2ae028378570\") " pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.549763 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e8e4fc55-d165-4961-90bf-1e6ecbdf09da-default-certificate\") pod \"router-default-5444994796-l6r9q\" (UID: 
\"e8e4fc55-d165-4961-90bf-1e6ecbdf09da\") " pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.554771 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d99dfc5-7325-42cc-b69d-97fb7afeb049-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dt8qz\" (UID: \"6d99dfc5-7325-42cc-b69d-97fb7afeb049\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.561579 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576009 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576268 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk25j\" (UniqueName: \"kubernetes.io/projected/468fea60-b89b-4a56-89c5-1e1f2586301c-kube-api-access-fk25j\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576318 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576339 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c5998eb-7e4c-415b-9e2b-6198992d2027-cert\") pod \"ingress-canary-lbnxq\" (UID: \"5c5998eb-7e4c-415b-9e2b-6198992d2027\") " pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576363 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx7zq\" (UniqueName: \"kubernetes.io/projected/5c5998eb-7e4c-415b-9e2b-6198992d2027-kube-api-access-cx7zq\") pod \"ingress-canary-lbnxq\" (UID: \"5c5998eb-7e4c-415b-9e2b-6198992d2027\") " pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576382 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v689\" (UniqueName: \"kubernetes.io/projected/c6d367ea-4210-4883-96bc-54987e5f6f7a-kube-api-access-9v689\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576405 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95a05559-52f4-4623-9a2d-7326ebf3d7bb-metrics-tls\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576431 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-csi-data-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576451 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-socket-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576474 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-plugins-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576492 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-mountpoint-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576510 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/30679830-f405-44ca-9575-ff37afd13189-signing-key\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576531 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6d367ea-4210-4883-96bc-54987e5f6f7a-secret-volume\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576562 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-registration-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576584 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a05559-52f4-4623-9a2d-7326ebf3d7bb-config-volume\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576620 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d367ea-4210-4883-96bc-54987e5f6f7a-config-volume\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576659 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qlm\" (UniqueName: \"kubernetes.io/projected/d04f847d-2261-48b3-9314-7b3b1cb8af38-kube-api-access-99qlm\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576680 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/468fea60-b89b-4a56-89c5-1e1f2586301c-node-bootstrap-token\") pod \"machine-config-server-kcjkk\" (UID: 
\"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576699 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gqvg\" (UniqueName: \"kubernetes.io/projected/95a05559-52f4-4623-9a2d-7326ebf3d7bb-kube-api-access-8gqvg\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576727 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/30679830-f405-44ca-9575-ff37afd13189-signing-cabundle\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576744 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9xtw\" (UniqueName: \"kubernetes.io/projected/286ddd24-e2a1-407b-95ba-5af10398ebb0-kube-api-access-q9xtw\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576763 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576785 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbqtx\" (UniqueName: 
\"kubernetes.io/projected/30679830-f405-44ca-9575-ff37afd13189-kube-api-access-fbqtx\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.576809 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/468fea60-b89b-4a56-89c5-1e1f2586301c-certs\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.577814 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.077786659 +0000 UTC m=+139.008331810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.578926 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/30679830-f405-44ca-9575-ff37afd13189-signing-cabundle\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.580038 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d367ea-4210-4883-96bc-54987e5f6f7a-config-volume\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.580423 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-registration-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.580906 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-plugins-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.580973 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-csi-data-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.581019 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-socket-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.581060 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/286ddd24-e2a1-407b-95ba-5af10398ebb0-mountpoint-dir\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.581398 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a05559-52f4-4623-9a2d-7326ebf3d7bb-config-volume\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.582249 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc 
kubenswrapper[4765]: I1203 20:40:40.582820 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.583279 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c5998eb-7e4c-415b-9e2b-6198992d2027-cert\") pod \"ingress-canary-lbnxq\" (UID: \"5c5998eb-7e4c-415b-9e2b-6198992d2027\") " pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.584644 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.584712 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95a05559-52f4-4623-9a2d-7326ebf3d7bb-metrics-tls\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.584773 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/30679830-f405-44ca-9575-ff37afd13189-signing-key\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.585439 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6d367ea-4210-4883-96bc-54987e5f6f7a-secret-volume\") pod \"collect-profiles-29413230-bdsvs\" (UID: 
\"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.587876 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/468fea60-b89b-4a56-89c5-1e1f2586301c-node-bootstrap-token\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.589707 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/468fea60-b89b-4a56-89c5-1e1f2586301c-certs\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.591912 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.606929 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.615918 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.615928 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk25j\" (UniqueName: \"kubernetes.io/projected/468fea60-b89b-4a56-89c5-1e1f2586301c-kube-api-access-fk25j\") pod \"machine-config-server-kcjkk\" (UID: \"468fea60-b89b-4a56-89c5-1e1f2586301c\") " pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.621108 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.629383 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gqvg\" (UniqueName: \"kubernetes.io/projected/95a05559-52f4-4623-9a2d-7326ebf3d7bb-kube-api-access-8gqvg\") pod \"dns-default-4fsnt\" (UID: \"95a05559-52f4-4623-9a2d-7326ebf3d7bb\") " pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.630193 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.652107 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.655713 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.662575 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9xtw\" (UniqueName: \"kubernetes.io/projected/286ddd24-e2a1-407b-95ba-5af10398ebb0-kube-api-access-q9xtw\") pod \"csi-hostpathplugin-4jqxz\" (UID: \"286ddd24-e2a1-407b-95ba-5af10398ebb0\") " pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.672592 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.680909 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.681196 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.181184152 +0000 UTC m=+139.111729303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.686683 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.687492 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qlm\" (UniqueName: \"kubernetes.io/projected/d04f847d-2261-48b3-9314-7b3b1cb8af38-kube-api-access-99qlm\") pod \"marketplace-operator-79b997595-984s7\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.694053 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.700094 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.706495 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.711190 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx7zq\" (UniqueName: \"kubernetes.io/projected/5c5998eb-7e4c-415b-9e2b-6198992d2027-kube-api-access-cx7zq\") pod \"ingress-canary-lbnxq\" (UID: \"5c5998eb-7e4c-415b-9e2b-6198992d2027\") " pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.714005 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7jkhw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.719974 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.726060 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.727035 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbqtx\" (UniqueName: \"kubernetes.io/projected/30679830-f405-44ca-9575-ff37afd13189-kube-api-access-fbqtx\") pod \"service-ca-9c57cc56f-q9f52\" (UID: \"30679830-f405-44ca-9575-ff37afd13189\") " pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.736604 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.741677 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.744462 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8wxvz"] Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.748108 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.748758 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v689\" (UniqueName: \"kubernetes.io/projected/c6d367ea-4210-4883-96bc-54987e5f6f7a-kube-api-access-9v689\") pod \"collect-profiles-29413230-bdsvs\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.772909 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.782374 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.782512 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.282485225 +0000 UTC m=+139.213030376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.782771 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.783186 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.283177214 +0000 UTC m=+139.213722365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.784771 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.800122 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl"] Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.804497 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.807286 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.813684 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv"] Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.831786 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.833340 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vslcz"] Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.835670 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.848628 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.853755 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.861096 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.871930 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.883658 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.883994 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.383974673 +0000 UTC m=+139.314519814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.894703 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.902856 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lbnxq" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.910193 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kcjkk" Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.974514 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw"] Dec 03 20:40:40 crc kubenswrapper[4765]: I1203 20:40:40.985618 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:40 crc kubenswrapper[4765]: E1203 20:40:40.985888 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.485877834 +0000 UTC m=+139.416422985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.003488 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.086613 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.086787 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.586766435 +0000 UTC m=+139.517311586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.086843 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.087156 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.587143806 +0000 UTC m=+139.517688957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.140277 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.187924 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.188066 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.688041367 +0000 UTC m=+139.618586518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.188374 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.188834 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.688818399 +0000 UTC m=+139.619363580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.214162 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-46h8d"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.255601 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.260635 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vj5h7"] Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.268852 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82868331_992e_4d1c_ba2c_8eabe62071f0.slice/crio-c8db8edc5bb241cc0433056a7e7e400710a0247a34318376185d485e7934dab2 WatchSource:0}: Error finding container c8db8edc5bb241cc0433056a7e7e400710a0247a34318376185d485e7934dab2: Status 404 returned error can't find the container with id c8db8edc5bb241cc0433056a7e7e400710a0247a34318376185d485e7934dab2 Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.269200 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e5174d2_b2e5_4c8f_ad57_e08e06808fcc.slice/crio-d42094b7351b4867170f8f761a564a9e3c097116cba34a0dccac1774c3c74b31 WatchSource:0}: Error finding container d42094b7351b4867170f8f761a564a9e3c097116cba34a0dccac1774c3c74b31: Status 404 
returned error can't find the container with id d42094b7351b4867170f8f761a564a9e3c097116cba34a0dccac1774c3c74b31 Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.279241 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f01ae7_ddd0_4a0a_9d26_e86a4e50f411.slice/crio-27ce53ce6e9c1f626228b8f413d2941bac3cff9f586474cf802891dd4c5f923d WatchSource:0}: Error finding container 27ce53ce6e9c1f626228b8f413d2941bac3cff9f586474cf802891dd4c5f923d: Status 404 returned error can't find the container with id 27ce53ce6e9c1f626228b8f413d2941bac3cff9f586474cf802891dd4c5f923d Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.283770 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ddf1877_d8b6_44e0_8b1d_45cb0e780b1f.slice/crio-1197e0f199feb67b55643d800a6713dc671929d40360c6141f718b28f77b61d1 WatchSource:0}: Error finding container 1197e0f199feb67b55643d800a6713dc671929d40360c6141f718b28f77b61d1: Status 404 returned error can't find the container with id 1197e0f199feb67b55643d800a6713dc671929d40360c6141f718b28f77b61d1 Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.284826 4765 generic.go:334] "Generic (PLEG): container finished" podID="d931ee2a-fefd-45cb-9cb6-4db4f3b20083" containerID="b45bad3163cd8ce8a0f2f633f449267cbaee677ae914a7118ea18e672a97ad23" exitCode=0 Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.284924 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" event={"ID":"d931ee2a-fefd-45cb-9cb6-4db4f3b20083","Type":"ContainerDied","Data":"b45bad3163cd8ce8a0f2f633f449267cbaee677ae914a7118ea18e672a97ad23"} Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.287038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" 
event={"ID":"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc","Type":"ContainerStarted","Data":"d42094b7351b4867170f8f761a564a9e3c097116cba34a0dccac1774c3c74b31"} Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.288274 4765 generic.go:334] "Generic (PLEG): container finished" podID="a8f9f0ff-9067-4555-873a-28815df1d4f6" containerID="1746bf29b8dd3127681d49abba5c5971a6a98bf743f6ed96da2753c318891430" exitCode=0 Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.288334 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" event={"ID":"a8f9f0ff-9067-4555-873a-28815df1d4f6","Type":"ContainerDied","Data":"1746bf29b8dd3127681d49abba5c5971a6a98bf743f6ed96da2753c318891430"} Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.288350 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" event={"ID":"a8f9f0ff-9067-4555-873a-28815df1d4f6","Type":"ContainerStarted","Data":"e1a7d97c7ef3aad7a81c32aae680a4b080db2a446bc7e25271c6a18ed733e6d3"} Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.289017 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.289373 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.789361891 +0000 UTC m=+139.719907032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.291957 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.309650 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" event={"ID":"29c890bc-a753-4a38-b8d5-33098898333b","Type":"ContainerStarted","Data":"e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2"} Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.311053 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.314490 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1c3235_df64_48e7_9c08_e7ee70c8fe49.slice/crio-5d9a8e650d8696589eb228d912a588ef1d49907f38a00038b473eabe50efadd5 WatchSource:0}: Error finding container 5d9a8e650d8696589eb228d912a588ef1d49907f38a00038b473eabe50efadd5: Status 404 returned error can't find the container with id 5d9a8e650d8696589eb228d912a588ef1d49907f38a00038b473eabe50efadd5 Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.317803 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.320964 4765 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.321127 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.324149 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.394474 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.397291 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:41.897274702 +0000 UTC m=+139.827819853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.411646 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d99dfc5_7325_42cc_b69d_97fb7afeb049.slice/crio-149e8c16eaba0e07ced764482218bfdbef5a262de864cc84644f94658217a215 WatchSource:0}: Error finding container 149e8c16eaba0e07ced764482218bfdbef5a262de864cc84644f94658217a215: Status 404 returned error can't find the container with id 149e8c16eaba0e07ced764482218bfdbef5a262de864cc84644f94658217a215 Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.482101 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.500324 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.501085 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.001040805 +0000 UTC m=+139.931585956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.501331 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.503738 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.003719012 +0000 UTC m=+139.934264163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.512203 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-stgcm"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.602002 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.602537 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.102523003 +0000 UTC m=+140.033068154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.638874 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2x88"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.658408 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7jkhw"] Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.704102 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.704464 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.204448434 +0000 UTC m=+140.134993585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.777725 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" podStartSLOduration=120.777705139 podStartE2EDuration="2m0.777705139s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:41.772558873 +0000 UTC m=+139.703104044" watchObservedRunningTime="2025-12-03 20:40:41.777705139 +0000 UTC m=+139.708250300" Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.805126 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.806114 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.306093747 +0000 UTC m=+140.236638898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:41 crc kubenswrapper[4765]: W1203 20:40:41.875067 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e540f5_00a4_4e01_aa34_fb3a4c249677.slice/crio-f524a93852a8a44e01270db663bacf656a9a60fa163bda764d4f939f78cf4443 WatchSource:0}: Error finding container f524a93852a8a44e01270db663bacf656a9a60fa163bda764d4f939f78cf4443: Status 404 returned error can't find the container with id f524a93852a8a44e01270db663bacf656a9a60fa163bda764d4f939f78cf4443 Dec 03 20:40:41 crc kubenswrapper[4765]: I1203 20:40:41.907048 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:41 crc kubenswrapper[4765]: E1203 20:40:41.907433 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.407420091 +0000 UTC m=+140.337965242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.008538 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.008900 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.508886428 +0000 UTC m=+140.439431579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.110653 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.111088 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.611072366 +0000 UTC m=+140.541617517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.125819 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.212286 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.212412 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.71238781 +0000 UTC m=+140.642932961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.212605 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.212867 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.712858264 +0000 UTC m=+140.643403415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.315114 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.318437 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.818406827 +0000 UTC m=+140.748951988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.318573 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.319002 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.818993704 +0000 UTC m=+140.749538855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.326708 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kcjkk" event={"ID":"468fea60-b89b-4a56-89c5-1e1f2586301c","Type":"ContainerStarted","Data":"eba1c81d154a3682d866370b6f87199e9e47cfac60fb4d021448456bb72f3257"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.334133 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" event={"ID":"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f","Type":"ContainerStarted","Data":"1197e0f199feb67b55643d800a6713dc671929d40360c6141f718b28f77b61d1"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.335612 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" event={"ID":"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411","Type":"ContainerStarted","Data":"27ce53ce6e9c1f626228b8f413d2941bac3cff9f586474cf802891dd4c5f923d"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.346939 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" event={"ID":"bd1c3235-df64-48e7-9c08-e7ee70c8fe49","Type":"ContainerStarted","Data":"5d9a8e650d8696589eb228d912a588ef1d49907f38a00038b473eabe50efadd5"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.417789 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" event={"ID":"f4e540f5-00a4-4e01-aa34-fb3a4c249677","Type":"ContainerStarted","Data":"f524a93852a8a44e01270db663bacf656a9a60fa163bda764d4f939f78cf4443"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.417832 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" event={"ID":"6d99dfc5-7325-42cc-b69d-97fb7afeb049","Type":"ContainerStarted","Data":"149e8c16eaba0e07ced764482218bfdbef5a262de864cc84644f94658217a215"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.417843 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" event={"ID":"64642dc1-627c-4efd-8301-2cb6f6166e43","Type":"ContainerStarted","Data":"563ca1278c76e5c80f6f450d09d2aafb4ba83b35d1971b0d763fef55e7ad6514"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.418893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" event={"ID":"1033ee94-376d-4190-8e79-ce0d34031aed","Type":"ContainerStarted","Data":"c47850562ef5e54565463c7c27ca3291a7c3af2505eef3d26daa035996feb757"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.418954 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.419011 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 20:40:42.91899704 +0000 UTC m=+140.849542191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.419472 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.419863 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:42.919856195 +0000 UTC m=+140.850401346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.421472 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" event={"ID":"72bcf519-7707-4f5f-b7f2-9a77cdfe292e","Type":"ContainerStarted","Data":"2276202025aaeeafbb4679e50dc488fd8af8fcb243f54a7fdf8701f8fbc12886"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.422571 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7jkhw" event={"ID":"ccac6268-00a4-448f-a04d-2d0aad175726","Type":"ContainerStarted","Data":"f8189f39da6dc1fad76e9a1b594df614a31e9953031a3c3ab9fad837c4f01d9a"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.429948 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" event={"ID":"ab43899d-622d-43b5-aef2-1bdc77e4b04d","Type":"ContainerStarted","Data":"eda321449b977e8b551a2d5ba5ec51e224e16a7aa0d24c38a9db226d0e60440b"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.438195 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-46h8d" event={"ID":"58118968-5018-452e-ad64-2ae028378570","Type":"ContainerStarted","Data":"535bc81189f9cc134636cab494cb4bd6e46fb5271d3f275201533bf7f0e49aad"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.439649 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" 
event={"ID":"484861d9-076d-43a5-854c-aae7cf403e43","Type":"ContainerStarted","Data":"64c58c64c21b88147f75adfbdfaf4af707a90d2d0bc1340332e3e560543a0e56"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.440338 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" event={"ID":"82868331-992e-4d1c-ba2c-8eabe62071f0","Type":"ContainerStarted","Data":"c8db8edc5bb241cc0433056a7e7e400710a0247a34318376185d485e7934dab2"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.441060 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" event={"ID":"1368046e-eb8f-4969-8698-aa8e0c72204a","Type":"ContainerStarted","Data":"cb6cd1844a81b8e277f1680820d8d93028e9d8619b4885d03356b3a9e295668b"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.442509 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" event={"ID":"c25824b2-7d4e-4fdd-ac80-d2975d802570","Type":"ContainerStarted","Data":"34928517784ad9713adf07bf2d991c330d6fab9b8e1490a902e76975241aa6d4"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.443453 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-stgcm" event={"ID":"de590c28-833f-4c0b-9184-62a37519a9e0","Type":"ContainerStarted","Data":"9ace2f839b02af642e5a57975d8971f4197113ac9a05e65668da66e2bf8fb466"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.444137 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" event={"ID":"84594671-44e4-4db9-b215-dd0d596e7ac5","Type":"ContainerStarted","Data":"24fdb8ead11bda133cf229fba4bbd5dfdba9542cd9edff009a99d1a81fc8656d"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.445059 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" event={"ID":"594c65ad-a006-4b48-946f-e6e9fe9e3f13","Type":"ContainerStarted","Data":"7cfbedcee75a7254ed4161d02156b714ad1306e249f136d195eb68ba1a6af31f"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.453891 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l6r9q" event={"ID":"e8e4fc55-d165-4961-90bf-1e6ecbdf09da","Type":"ContainerStarted","Data":"5590077833d1f5c7bdf93122baf1d1c2c0c5e31f60acf7c202d6374655928809"} Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.520662 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.520791 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.020775187 +0000 UTC m=+140.951320328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.521097 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.522448 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.022433874 +0000 UTC m=+140.952979025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.634237 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.635612 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.135594925 +0000 UTC m=+141.066140076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.635716 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.636060 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.136047878 +0000 UTC m=+141.066593019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.668637 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.682123 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2hvlc"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.688637 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.705222 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.720944 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.729414 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv"] Dec 03 20:40:42 crc kubenswrapper[4765]: W1203 20:40:42.733466 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30679830_f405_44ca_9575_ff37afd13189.slice/crio-aeae6c520aea8498bb37726e6af6c71620b1338b93886fec9ff2ef4b7ae10f66 WatchSource:0}: Error finding container 
aeae6c520aea8498bb37726e6af6c71620b1338b93886fec9ff2ef4b7ae10f66: Status 404 returned error can't find the container with id aeae6c520aea8498bb37726e6af6c71620b1338b93886fec9ff2ef4b7ae10f66 Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.737055 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.741709 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c5s6t"] Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.745462 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.245437651 +0000 UTC m=+141.175982802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.745525 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-q9f52"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.755784 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf"] Dec 03 20:40:42 crc kubenswrapper[4765]: W1203 20:40:42.797878 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21beb3_07f3_450b_9c58_8edc8ef9b9ec.slice/crio-0aa10fad4893885c8e1bb307afd96b0a9b998fd43468f6d9cb3d18194088202a WatchSource:0}: Error finding container 0aa10fad4893885c8e1bb307afd96b0a9b998fd43468f6d9cb3d18194088202a: Status 404 returned error can't find the container with id 0aa10fad4893885c8e1bb307afd96b0a9b998fd43468f6d9cb3d18194088202a Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.821776 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-984s7"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.823430 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.846784 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.847058 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.347046322 +0000 UTC m=+141.277591473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.863493 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4fsnt"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.877101 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.887871 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lbnxq"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.895137 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs"] Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.947099 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.947352 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.447330347 +0000 UTC m=+141.377875498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:42 crc kubenswrapper[4765]: I1203 20:40:42.947854 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:42 crc kubenswrapper[4765]: E1203 20:40:42.948083 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.448072127 +0000 UTC m=+141.378617278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.037946 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4jqxz"] Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.048912 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.049417 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.549401941 +0000 UTC m=+141.479947092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.152143 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.152600 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.652565097 +0000 UTC m=+141.583110248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.254156 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.254520 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.754503559 +0000 UTC m=+141.685048710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.366795 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.367864 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.867849485 +0000 UTC m=+141.798394636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.469835 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.470552 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:43.970528687 +0000 UTC m=+141.901073838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.481670 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" event={"ID":"b3d1ca02-bcba-4523-8713-12443cebf75d","Type":"ContainerStarted","Data":"2873b4541dd1ac47a226abe1e63a5477cfaeb7eb8f497c3356d89acb24839f6e"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.481846 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" event={"ID":"b3d1ca02-bcba-4523-8713-12443cebf75d","Type":"ContainerStarted","Data":"9dff746c828dd5059ac3a43b191387e934f9a1de6bb3eb15ed4dc3e4f5750bca"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.483180 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.489723 4765 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-645jw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.489776 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" podUID="b3d1ca02-bcba-4523-8713-12443cebf75d" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.512170 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" event={"ID":"f4e540f5-00a4-4e01-aa34-fb3a4c249677","Type":"ContainerStarted","Data":"f0705854082e0c28fcbf91bbf1e5a1c07aeb561fd62a78bbe45dd9d00ee710ae"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.512956 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" event={"ID":"6fce1062-dbe7-42f2-a519-d0bc96d9c16d","Type":"ContainerStarted","Data":"07da2123e399e714af9d294414afd3a51ae9ddd17c01ac90e75ed50fbdab51f6"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.513828 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" event={"ID":"70f44f4f-8e44-460a-9696-5af11fc75a95","Type":"ContainerStarted","Data":"b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.513845 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" event={"ID":"70f44f4f-8e44-460a-9696-5af11fc75a95","Type":"ContainerStarted","Data":"15517a6fa097ba4862bbe297166a812e25a2b04a2bf5a03da2d70d7e6ec8be27"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.514962 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.515603 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" event={"ID":"65a4be30-e857-49e0-8bef-1d24b338b5b2","Type":"ContainerStarted","Data":"af742b015e5a00d930854f0d4af7cc9150f291091f0043169341bc767eb1ad1f"} Dec 03 
20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.529842 4765 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2hvlc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.529893 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.572817 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" event={"ID":"82868331-992e-4d1c-ba2c-8eabe62071f0","Type":"ContainerStarted","Data":"f2a6d61bf22a2e842252b89843cab83733355588623d530e8b57cfe579fbe489"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.573685 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.574491 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.074480295 +0000 UTC m=+142.005025446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.579854 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" event={"ID":"4e5174d2-b2e5-4c8f-ad57-e08e06808fcc","Type":"ContainerStarted","Data":"c6ac99d1ba9d9f65796aa9807a966b9be519357eddb8b9856534f9ea6c19cf2d"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.616042 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" podStartSLOduration=122.616024418 podStartE2EDuration="2m2.616024418s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.530281837 +0000 UTC m=+141.460826988" watchObservedRunningTime="2025-12-03 20:40:43.616024418 +0000 UTC m=+141.546569569" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.616373 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" podStartSLOduration=122.616368228 podStartE2EDuration="2m2.616368228s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.615493522 +0000 UTC m=+141.546038673" watchObservedRunningTime="2025-12-03 20:40:43.616368228 +0000 UTC 
m=+141.546913369" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.633671 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-stgcm" event={"ID":"de590c28-833f-4c0b-9184-62a37519a9e0","Type":"ContainerStarted","Data":"402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.635733 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" event={"ID":"bd1c3235-df64-48e7-9c08-e7ee70c8fe49","Type":"ContainerStarted","Data":"aaeff3aaf3db5b43433c2a950d60bc1e744bad878ca7ca81431f932f6596aed3"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.645578 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" event={"ID":"594c65ad-a006-4b48-946f-e6e9fe9e3f13","Type":"ContainerStarted","Data":"e151dee2049d8d4d8b05cfa96cfd5e7b7f26489b2f01f9af890b46cfc2de1f3c"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.648047 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" event={"ID":"c6d367ea-4210-4883-96bc-54987e5f6f7a","Type":"ContainerStarted","Data":"84c951e2e7a8c4b26ff13500e78912c4000a685796785c33d0390abb65d9a6de"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.655284 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" event={"ID":"54a4db3f-a64d-4b52-8d7f-cece2602c57c","Type":"ContainerStarted","Data":"5b7c616b0a1e65ae45ab04b87b302e7b3ed82dc99b6f527b919c5721667e9883"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.655343 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" 
event={"ID":"54a4db3f-a64d-4b52-8d7f-cece2602c57c","Type":"ContainerStarted","Data":"a29ccdc4eab8f0d7e09e45754fccd3b51cfdfb4d6541b9324d98728e865ea8c7"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.658926 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lbnxq" event={"ID":"5c5998eb-7e4c-415b-9e2b-6198992d2027","Type":"ContainerStarted","Data":"6798c5400f455f92486989bdef6a9e945111931a54f15063e1f89a568a67dacb"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.675918 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.677126 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.177111096 +0000 UTC m=+142.107656247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.680713 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" event={"ID":"d04f847d-2261-48b3-9314-7b3b1cb8af38","Type":"ContainerStarted","Data":"cbe8a9ec97755297c5af200aa8e713c66aa61570341243e01bd96bc22dd476a1"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.725411 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" event={"ID":"1368046e-eb8f-4969-8698-aa8e0c72204a","Type":"ContainerStarted","Data":"7ff4a12659e2b78d6b848b3d5f3903a2bd0ae50d4fd07d9cb7d31b5027b29503"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.737484 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" event={"ID":"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f","Type":"ContainerStarted","Data":"2d6c057ce8f6e53f8c46f42bb6da390638ac14e4b07d3e7c97659f51b9216548"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.766551 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" event={"ID":"6d99dfc5-7325-42cc-b69d-97fb7afeb049","Type":"ContainerStarted","Data":"83f0ba4c0b64334cc23cc86fa4f44db010a14e8c7007dc18ce439dee46389bb2"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.783047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.784566 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.284554024 +0000 UTC m=+142.215099175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.796250 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mm9nl" podStartSLOduration=122.796230386 podStartE2EDuration="2m2.796230386s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.786956153 +0000 UTC m=+141.717501304" watchObservedRunningTime="2025-12-03 20:40:43.796230386 +0000 UTC m=+141.726775527" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.797285 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" 
event={"ID":"64642dc1-627c-4efd-8301-2cb6f6166e43","Type":"ContainerStarted","Data":"d396c302dd762ab711e7f36cdfe65406258cf6a51aec51f02e42a9d32df4a626"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.797479 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.805183 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kcjkk" event={"ID":"468fea60-b89b-4a56-89c5-1e1f2586301c","Type":"ContainerStarted","Data":"80effedddd8c7a2acb372b798619dd31d39e497fe4cb76ee4fe15eea12861cc3"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.829834 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.891996 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4fsnt" event={"ID":"95a05559-52f4-4623-9a2d-7326ebf3d7bb","Type":"ContainerStarted","Data":"c04f842e9e354058a61dd5a73b4c812913186fd195dd5d92ebf4e16ce69cef1b"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.892653 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.892832 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 20:40:44.392809745 +0000 UTC m=+142.323354896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.893029 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.893387 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.393378252 +0000 UTC m=+142.323923403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.930927 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8wxvz" podStartSLOduration=122.930908459 podStartE2EDuration="2m2.930908459s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.892359442 +0000 UTC m=+141.822904613" watchObservedRunningTime="2025-12-03 20:40:43.930908459 +0000 UTC m=+141.861453610" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.945122 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" event={"ID":"d931ee2a-fefd-45cb-9cb6-4db4f3b20083","Type":"ContainerStarted","Data":"e7d38a70ea1385abda96ac1a5f44c5a1e508e06034d63e5a9572820a9663bdd4"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.962723 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qv6v8" podStartSLOduration=122.962710114 podStartE2EDuration="2m2.962710114s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.961629394 +0000 UTC m=+141.892174545" watchObservedRunningTime="2025-12-03 20:40:43.962710114 +0000 UTC m=+141.893255265" Dec 
03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.962991 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9fdfw" podStartSLOduration=122.962986693 podStartE2EDuration="2m2.962986693s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.933841503 +0000 UTC m=+141.864386664" watchObservedRunningTime="2025-12-03 20:40:43.962986693 +0000 UTC m=+141.893531844" Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.995707 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" event={"ID":"349a2f61-6556-4ca3-b200-30613a913e11","Type":"ContainerStarted","Data":"acc239fb70f3833ce6a648999f27cdac9dd680eaf8cafb3c8aa2bfa5f1736a06"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.995748 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" event={"ID":"349a2f61-6556-4ca3-b200-30613a913e11","Type":"ContainerStarted","Data":"ed4f9f3ba8422a0842053712b8bda1f495bf96c7390a4c59194fd15f84a60474"} Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.996391 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.996649 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.49663524 +0000 UTC m=+142.427180381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:43 crc kubenswrapper[4765]: I1203 20:40:43.996744 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:43 crc kubenswrapper[4765]: E1203 20:40:43.997000 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.49699411 +0000 UTC m=+142.427539261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.004210 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" event={"ID":"9c60f706-afe3-43f0-a8b6-f1d8003a0d82","Type":"ContainerStarted","Data":"276e98b817d21859f0822dde78dd7ea6e520b71b1c5e8776591043fff84e78ab"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.016899 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l2qhf" podStartSLOduration=123.016873876 podStartE2EDuration="2m3.016873876s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:43.991931836 +0000 UTC m=+141.922477007" watchObservedRunningTime="2025-12-03 20:40:44.016873876 +0000 UTC m=+141.947419027" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.040193 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kcjkk" podStartSLOduration=7.040158888 podStartE2EDuration="7.040158888s" podCreationTimestamp="2025-12-03 20:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.009226238 +0000 UTC m=+141.939771389" watchObservedRunningTime="2025-12-03 20:40:44.040158888 +0000 UTC m=+141.970704039" Dec 03 20:40:44 crc 
kubenswrapper[4765]: I1203 20:40:44.045383 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" event={"ID":"84594671-44e4-4db9-b215-dd0d596e7ac5","Type":"ContainerStarted","Data":"962adc6fb237712285dd91192cf900cde76fca31ff60a30ec6b35d11d1e9caf7"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.057884 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" event={"ID":"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411","Type":"ContainerStarted","Data":"50f37a9572f6cba624c540572ed0798e7e77ca546fdf040f9497658b2adb543a"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.070988 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-46h8d" event={"ID":"58118968-5018-452e-ad64-2ae028378570","Type":"ContainerStarted","Data":"c50f25c3ad48780fd5c76adcc01c8629dcef99fd87ae5d59728a0ddd7cfa56c5"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.072028 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.077833 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" event={"ID":"286ddd24-e2a1-407b-95ba-5af10398ebb0","Type":"ContainerStarted","Data":"eb323a34dfce47f169209d9eaedbde20400c905d65c1854c99c63f781f6c7877"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.080791 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dt8qz" podStartSLOduration=123.080771975 podStartE2EDuration="2m3.080771975s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 20:40:44.057636926 +0000 UTC m=+141.988182077" watchObservedRunningTime="2025-12-03 20:40:44.080771975 +0000 UTC m=+142.011317126" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.080880 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-stgcm" podStartSLOduration=123.080875548 podStartE2EDuration="2m3.080875548s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.077579603 +0000 UTC m=+142.008124754" watchObservedRunningTime="2025-12-03 20:40:44.080875548 +0000 UTC m=+142.011420699" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.086893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" event={"ID":"484861d9-076d-43a5-854c-aae7cf403e43","Type":"ContainerStarted","Data":"349e7ded2b0661dc8fda4448ff064b45348d7f45a3f71fd43662f24742fa10b6"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.087712 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.095119 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l6r9q" event={"ID":"e8e4fc55-d165-4961-90bf-1e6ecbdf09da","Type":"ContainerStarted","Data":"9ca5714011c762ac5b45edbcf5f686137396b087b60e702cede1f09b55615c6e"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.100546 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.103407 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.603374968 +0000 UTC m=+142.533920119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.105789 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" event={"ID":"1033ee94-376d-4190-8e79-ce0d34031aed","Type":"ContainerStarted","Data":"ed5478b0d7886f51dec624d9eb712df5f96571ddc0e4b15c105d702b85749603"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.109529 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vwm56" podStartSLOduration=123.109514873 podStartE2EDuration="2m3.109514873s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.109384579 +0000 UTC m=+142.039929730" watchObservedRunningTime="2025-12-03 20:40:44.109514873 +0000 UTC m=+142.040060024" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.126882 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" event={"ID":"c25824b2-7d4e-4fdd-ac80-d2975d802570","Type":"ContainerStarted","Data":"7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.127276 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.131635 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-46h8d" podStartSLOduration=123.131622532 podStartE2EDuration="2m3.131622532s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.128427511 +0000 UTC m=+142.058972662" watchObservedRunningTime="2025-12-03 20:40:44.131622532 +0000 UTC m=+142.062167683" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.142705 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" event={"ID":"19034462-4553-4fe9-ba44-6c2b4c6e17d3","Type":"ContainerStarted","Data":"1ad1924529254cbfd82c90d5aeda80b5d240166dd18d9d23f3868d68880836a2"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.149348 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" podStartSLOduration=123.149272164 podStartE2EDuration="2m3.149272164s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.144154588 +0000 UTC m=+142.074699739" watchObservedRunningTime="2025-12-03 20:40:44.149272164 +0000 
UTC m=+142.079817315" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.166772 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" event={"ID":"2d87ca8a-2073-4f34-8dd7-a02a348018e9","Type":"ContainerStarted","Data":"08a00b21ff74064a89cf58849ff8eb6ee35ff08327b74932425792664a97c54f"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.178776 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" podStartSLOduration=123.178761194 podStartE2EDuration="2m3.178761194s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.16211317 +0000 UTC m=+142.092658311" watchObservedRunningTime="2025-12-03 20:40:44.178761194 +0000 UTC m=+142.109306345" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.181104 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" event={"ID":"30679830-f405-44ca-9575-ff37afd13189","Type":"ContainerStarted","Data":"4aec3364662ee76fa49e881dba56ae7bc60b268abff62a394865083e20c2a5ee"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.181131 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" event={"ID":"30679830-f405-44ca-9575-ff37afd13189","Type":"ContainerStarted","Data":"aeae6c520aea8498bb37726e6af6c71620b1338b93886fec9ff2ef4b7ae10f66"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.183744 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" event={"ID":"72bcf519-7707-4f5f-b7f2-9a77cdfe292e","Type":"ContainerStarted","Data":"699dbd6bcd95b9bc3d06b74a1a176aec1fe8bc4e8a737a7807ad650303319ab7"} Dec 03 20:40:44 crc 
kubenswrapper[4765]: I1203 20:40:44.188146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" event={"ID":"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec","Type":"ContainerStarted","Data":"0aa10fad4893885c8e1bb307afd96b0a9b998fd43468f6d9cb3d18194088202a"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.197625 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7jkhw" event={"ID":"ccac6268-00a4-448f-a04d-2d0aad175726","Type":"ContainerStarted","Data":"56510f3ce3a9178de1c8b6b706843099d45b71fcea024815aff249bf8e506ad2"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.198535 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7jkhw" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.200259 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-l6r9q" podStartSLOduration=123.200237374 podStartE2EDuration="2m3.200237374s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.178034482 +0000 UTC m=+142.108579633" watchObservedRunningTime="2025-12-03 20:40:44.200237374 +0000 UTC m=+142.130782525" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.200460 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-7jkhw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.201489 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7jkhw" podUID="ccac6268-00a4-448f-a04d-2d0aad175726" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.204566 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.208402 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.708385526 +0000 UTC m=+142.638930687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.217946 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" event={"ID":"ab43899d-622d-43b5-aef2-1bdc77e4b04d","Type":"ContainerStarted","Data":"1f0aed3f34420999133f2834427130c0e4f8ac3d24d5b2c37a695712f1d6e606"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.219413 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" 
podStartSLOduration=123.219390909 podStartE2EDuration="2m3.219390909s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.196779396 +0000 UTC m=+142.127324547" watchObservedRunningTime="2025-12-03 20:40:44.219390909 +0000 UTC m=+142.149936060" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.222398 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" event={"ID":"a8f9f0ff-9067-4555-873a-28815df1d4f6","Type":"ContainerStarted","Data":"1483e8a675d6418919d826611b906ded7776c7e944df6490282be59104fb4964"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.224674 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" event={"ID":"7fe27b06-dcc9-41a3-9768-b84fc02e378f","Type":"ContainerStarted","Data":"ffbbac0c7f11dd8296c8d507440b09375e7b5e59ecd557e2ae7467081453144b"} Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.238253 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-q9f52" podStartSLOduration=123.238231426 podStartE2EDuration="2m3.238231426s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.218113444 +0000 UTC m=+142.148658595" watchObservedRunningTime="2025-12-03 20:40:44.238231426 +0000 UTC m=+142.168776577" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.239402 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7jkhw" podStartSLOduration=123.239395499 podStartE2EDuration="2m3.239395499s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.230590948 +0000 UTC m=+142.161136109" watchObservedRunningTime="2025-12-03 20:40:44.239395499 +0000 UTC m=+142.169940640" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.247096 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nhpxz" podStartSLOduration=123.247084268 podStartE2EDuration="2m3.247084268s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.244629418 +0000 UTC m=+142.175174579" watchObservedRunningTime="2025-12-03 20:40:44.247084268 +0000 UTC m=+142.177629419" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.285274 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" podStartSLOduration=123.285257065 podStartE2EDuration="2m3.285257065s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:44.28266054 +0000 UTC m=+142.213205701" watchObservedRunningTime="2025-12-03 20:40:44.285257065 +0000 UTC m=+142.215802216" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.286917 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.286959 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.306312 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.306504 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.806483728 +0000 UTC m=+142.737028879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.306591 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.307270 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.807223899 +0000 UTC m=+142.737769050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.407776 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.408074 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:44.908058979 +0000 UTC m=+142.838604130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.508941 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.509323 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.00929039 +0000 UTC m=+142.939835551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.610111 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.610492 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.11047613 +0000 UTC m=+143.041021281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.696527 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.699707 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 20:40:44 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:44 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:44 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.699764 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.711124 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.711490 4765 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.211479215 +0000 UTC m=+143.142024366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.812610 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.813101 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.313073586 +0000 UTC m=+143.243618737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.834503 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n54j6" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.920351 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:44 crc kubenswrapper[4765]: E1203 20:40:44.920848 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.420837182 +0000 UTC m=+143.351382333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.973510 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fx79t"] Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.976031 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-46h8d" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.976111 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.976509 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:44 crc kubenswrapper[4765]: I1203 20:40:44.995803 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.023828 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fx79t"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.039986 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.040489 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-utilities\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.050861 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhfq\" (UniqueName: \"kubernetes.io/projected/e9b28d97-921e-45dd-bb19-ff02939e1bf7-kube-api-access-kvhfq\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.068165 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 20:40:45.568130685 +0000 UTC m=+143.498675846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.068407 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-catalog-content\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.068535 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.068871 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.568861205 +0000 UTC m=+143.499406356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.169867 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.172466 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.672438933 +0000 UTC m=+143.602984084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.197791 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-catalog-content\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.197890 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.197942 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-utilities\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.197991 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvhfq\" (UniqueName: \"kubernetes.io/projected/e9b28d97-921e-45dd-bb19-ff02939e1bf7-kube-api-access-kvhfq\") pod \"community-operators-fx79t\" (UID: 
\"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.198691 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-catalog-content\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.198924 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.698913977 +0000 UTC m=+143.629459128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.199125 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-utilities\") pod \"community-operators-fx79t\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.239216 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvhfq\" (UniqueName: \"kubernetes.io/projected/e9b28d97-921e-45dd-bb19-ff02939e1bf7-kube-api-access-kvhfq\") pod \"community-operators-fx79t\" (UID: 
\"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.298279 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" event={"ID":"594c65ad-a006-4b48-946f-e6e9fe9e3f13","Type":"ContainerStarted","Data":"b4f1b0d0c31b6e3bf87242036006edc774f1bd11bb3d2c5edd6bef29e17f607f"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.300406 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.300524 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.800507098 +0000 UTC m=+143.731052249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.300878 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.301344 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.801331591 +0000 UTC m=+143.731876742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.330290 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7bgqb" podStartSLOduration=124.330272166 podStartE2EDuration="2m4.330272166s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.329957077 +0000 UTC m=+143.260502228" watchObservedRunningTime="2025-12-03 20:40:45.330272166 +0000 UTC m=+143.260817317" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.368585 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.374093 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kf8h6"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.375024 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.382387 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" event={"ID":"d931ee2a-fefd-45cb-9cb6-4db4f3b20083","Type":"ContainerStarted","Data":"f9b152cfdbced42e4beebc483e45150dd255793752fa21613f30169f9303941a"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.398060 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf8h6"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.402575 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.402960 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v82l6\" (UniqueName: \"kubernetes.io/projected/1c798c04-ea6b-4b91-8ad7-f42df24d0558-kube-api-access-v82l6\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.403179 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-utilities\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.403221 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-catalog-content\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.404716 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:45.904677703 +0000 UTC m=+143.835222854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.439395 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" event={"ID":"d04f847d-2261-48b3-9314-7b3b1cb8af38","Type":"ContainerStarted","Data":"f9bb7a5507580e792bbc8d8d1c1f297433d34d67ca76237ac690d10015d62bc3"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.439686 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.441585 4765 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-984s7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Dec 03 20:40:45 crc kubenswrapper[4765]: 
I1203 20:40:45.441611 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.443468 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" event={"ID":"2d87ca8a-2073-4f34-8dd7-a02a348018e9","Type":"ContainerStarted","Data":"a00a57d978fc961a70c2876cb58a73a5f906b2d86f0b309da371cb79e15c75c2"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.464958 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" event={"ID":"72bcf519-7707-4f5f-b7f2-9a77cdfe292e","Type":"ContainerStarted","Data":"174dacaf014e6ebf033450964fdd9488e2e3618364c83edc69f394f8e3e3af48"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.495567 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.503907 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" podStartSLOduration=124.503885957 podStartE2EDuration="2m4.503885957s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.481719396 +0000 UTC m=+143.412264557" watchObservedRunningTime="2025-12-03 20:40:45.503885957 +0000 UTC m=+143.434431108" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.507923 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v82l6\" 
(UniqueName: \"kubernetes.io/projected/1c798c04-ea6b-4b91-8ad7-f42df24d0558-kube-api-access-v82l6\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.508050 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-utilities\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.508074 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.508095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-catalog-content\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.511668 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.011648987 +0000 UTC m=+143.942194138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.512662 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-utilities\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.513403 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-catalog-content\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.544859 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" event={"ID":"c6d367ea-4210-4883-96bc-54987e5f6f7a","Type":"ContainerStarted","Data":"424b6be4031807c74c3c047fb41f6a3646db42bce12e438e18006dec58df5441"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.547751 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qg9sx" podStartSLOduration=124.547735824 podStartE2EDuration="2m4.547735824s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 20:40:45.513748617 +0000 UTC m=+143.444293778" watchObservedRunningTime="2025-12-03 20:40:45.547735824 +0000 UTC m=+143.478280975" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.549150 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2bp7f" podStartSLOduration=124.549141815 podStartE2EDuration="2m4.549141815s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.548896918 +0000 UTC m=+143.479442069" watchObservedRunningTime="2025-12-03 20:40:45.549141815 +0000 UTC m=+143.479686966" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.581744 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v82l6\" (UniqueName: \"kubernetes.io/projected/1c798c04-ea6b-4b91-8ad7-f42df24d0558-kube-api-access-v82l6\") pod \"community-operators-kf8h6\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.582422 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pw7wt"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.583668 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.585260 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" podStartSLOduration=124.585251082 podStartE2EDuration="2m4.585251082s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.584692326 +0000 UTC m=+143.515237477" watchObservedRunningTime="2025-12-03 20:40:45.585251082 +0000 UTC m=+143.515796233" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.598115 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pw7wt"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.605625 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.608644 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.609091 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5mfr\" (UniqueName: \"kubernetes.io/projected/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-kube-api-access-g5mfr\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.609141 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-catalog-content\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.609164 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-utilities\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.610159 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.110144421 +0000 UTC m=+144.040689572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.610635 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" event={"ID":"27f01ae7-ddd0-4a0a-9d26-e86a4e50f411","Type":"ContainerStarted","Data":"78b36ba1d8c64765bde79baa7ea5c0c40820d184b0764d395bc0cbec357eb0ff"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.655817 4765 generic.go:334] "Generic (PLEG): container finished" podID="7fe27b06-dcc9-41a3-9768-b84fc02e378f" containerID="3bfcda47b52c92523430311e3b08fec85414ad836d2e75a4a3da0c2ba6ad543a" exitCode=0 Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.655900 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" event={"ID":"7fe27b06-dcc9-41a3-9768-b84fc02e378f","Type":"ContainerDied","Data":"3bfcda47b52c92523430311e3b08fec85414ad836d2e75a4a3da0c2ba6ad543a"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.671577 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" event={"ID":"1368046e-eb8f-4969-8698-aa8e0c72204a","Type":"ContainerStarted","Data":"da0b5fc694eca68043a8921fcd8cedba7c514dd9040a921b7ac56b80d1a7b047"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.699023 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" 
event={"ID":"f4e540f5-00a4-4e01-aa34-fb3a4c249677","Type":"ContainerStarted","Data":"3db1d66a3b29d1ee8568b6e017b5e0efe4096824be9670c52753c548fb554850"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.700822 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" podStartSLOduration=124.700811821 podStartE2EDuration="2m4.700811821s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.697908228 +0000 UTC m=+143.628453379" watchObservedRunningTime="2025-12-03 20:40:45.700811821 +0000 UTC m=+143.631356972" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.712476 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 20:40:45 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:45 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:45 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.712529 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.714118 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.714148 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5mfr\" (UniqueName: \"kubernetes.io/projected/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-kube-api-access-g5mfr\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.714200 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-catalog-content\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.714222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-utilities\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.715257 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-catalog-content\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.715899 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-utilities\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " 
pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.716769 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.216754785 +0000 UTC m=+144.147299936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.726437 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" event={"ID":"9c60f706-afe3-43f0-a8b6-f1d8003a0d82","Type":"ContainerStarted","Data":"ca3ea4a83c88b41c06098f479789f3f25a8924f8b3281375dd29116cd292f4f9"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.739967 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.740874 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lbnxq" event={"ID":"5c5998eb-7e4c-415b-9e2b-6198992d2027","Type":"ContainerStarted","Data":"94fc2c677fc49449bd7941fb0f0b93883d62173a18319660eb249c0bb399cf55"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.755783 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ft2xt"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.756640 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.763820 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ft2xt"] Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.768995 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4fsnt" event={"ID":"95a05559-52f4-4623-9a2d-7326ebf3d7bb","Type":"ContainerStarted","Data":"c3a637c06c27795df1e28d001a2d30fef516fc6ff5a86a2aa393cca86ea02b87"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.820984 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.824472 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.324444839 +0000 UTC m=+144.254989990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.827362 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" event={"ID":"bd21beb3-07f3-450b-9c58-8edc8ef9b9ec","Type":"ContainerStarted","Data":"e799fc930e2c367c71e7a1324b206ec873ebc5ddb14570a64a1bedb968dccd25"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.834393 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t2x88" podStartSLOduration=124.834365952 podStartE2EDuration="2m4.834365952s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.825933002 +0000 UTC m=+143.756478153" watchObservedRunningTime="2025-12-03 20:40:45.834365952 +0000 UTC m=+143.764911103" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.835381 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c5s6t" podStartSLOduration=124.835374271 podStartE2EDuration="2m4.835374271s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.792610504 +0000 UTC m=+143.723155665" watchObservedRunningTime="2025-12-03 20:40:45.835374271 +0000 UTC m=+143.765919422" Dec 03 20:40:45 crc 
kubenswrapper[4765]: I1203 20:40:45.838861 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" event={"ID":"6fce1062-dbe7-42f2-a519-d0bc96d9c16d","Type":"ContainerStarted","Data":"7ed9b3175b44ee87406d49e929375993a193f72437eeda51b50ff7fe213af935"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.840142 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5mfr\" (UniqueName: \"kubernetes.io/projected/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-kube-api-access-g5mfr\") pod \"certified-operators-pw7wt\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.851610 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2bvfx" event={"ID":"19034462-4553-4fe9-ba44-6c2b4c6e17d3","Type":"ContainerStarted","Data":"41c7c307d854316d2e0a5c75ee82489ccbe7e22e89d7051747a3b55912af6a54"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.878937 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zqbfv" podStartSLOduration=124.878923419 podStartE2EDuration="2m4.878923419s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.878371654 +0000 UTC m=+143.808916815" watchObservedRunningTime="2025-12-03 20:40:45.878923419 +0000 UTC m=+143.809468570" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.917406 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" 
event={"ID":"1033ee94-376d-4190-8e79-ce0d34031aed","Type":"ContainerStarted","Data":"f645a945a1c4d62b5f4bd78220bddee1b5de322b61d2d36d578db793a297ce2d"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.917970 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2sm67" podStartSLOduration=124.91795442 podStartE2EDuration="2m4.91795442s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.916674434 +0000 UTC m=+143.847219585" watchObservedRunningTime="2025-12-03 20:40:45.91795442 +0000 UTC m=+143.848499571" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.928695 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-utilities\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.928802 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.928826 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-catalog-content\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " 
pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.928842 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hbn\" (UniqueName: \"kubernetes.io/projected/c036902a-7c68-473e-966f-c5d36930fbaf-kube-api-access-97hbn\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:45 crc kubenswrapper[4765]: E1203 20:40:45.930162 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.430138878 +0000 UTC m=+144.360684049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.944365 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vd9qv" podStartSLOduration=124.944349722 podStartE2EDuration="2m4.944349722s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.943811847 +0000 UTC m=+143.874356998" watchObservedRunningTime="2025-12-03 20:40:45.944349722 +0000 UTC m=+143.874894873" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.951099 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" event={"ID":"8ddf1877-d8b6-44e0-8b1d-45cb0e780b1f","Type":"ContainerStarted","Data":"c75e0a36939bffaf46534180a7e49b1909b6a1e1688d817b9fe1df018ca5aec4"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.967133 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" event={"ID":"65a4be30-e857-49e0-8bef-1d24b338b5b2","Type":"ContainerStarted","Data":"0c94b890ff8945524560bfdf10341348af5bbbd00a0d3a526454181db3f7222a"} Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.967172 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.968122 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-7jkhw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.968169 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7jkhw" podUID="ccac6268-00a4-448f-a04d-2d0aad175726" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.970828 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.981777 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lbnxq" podStartSLOduration=8.981757716 podStartE2EDuration="8.981757716s" podCreationTimestamp="2025-12-03 20:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:45.971568337 +0000 UTC m=+143.902113478" watchObservedRunningTime="2025-12-03 20:40:45.981757716 +0000 UTC m=+143.912302867" Dec 03 20:40:45 crc kubenswrapper[4765]: I1203 20:40:45.989581 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6g5wl" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.028219 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" podStartSLOduration=125.028203759 podStartE2EDuration="2m5.028203759s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:46.026803778 +0000 UTC m=+143.957348929" watchObservedRunningTime="2025-12-03 20:40:46.028203759 +0000 UTC m=+143.958748900" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.032107 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.032429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-utilities\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.032848 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-utilities\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.033888 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.53387293 +0000 UTC m=+144.464418081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.036464 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.036554 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-catalog-content\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.036591 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hbn\" (UniqueName: \"kubernetes.io/projected/c036902a-7c68-473e-966f-c5d36930fbaf-kube-api-access-97hbn\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.056064 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 20:40:46.556049471 +0000 UTC m=+144.486594622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.060037 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-catalog-content\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.086097 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hbn\" (UniqueName: \"kubernetes.io/projected/c036902a-7c68-473e-966f-c5d36930fbaf-kube-api-access-97hbn\") pod \"certified-operators-ft2xt\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.137802 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.138483 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.638469247 +0000 UTC m=+144.569014398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.147872 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vslcz" podStartSLOduration=125.147856393 podStartE2EDuration="2m5.147856393s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:46.102260866 +0000 UTC m=+144.032806017" watchObservedRunningTime="2025-12-03 20:40:46.147856393 +0000 UTC m=+144.078401544" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.147967 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vj5h7" podStartSLOduration=125.147962487 podStartE2EDuration="2m5.147962487s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:46.146711091 +0000 UTC m=+144.077256242" watchObservedRunningTime="2025-12-03 20:40:46.147962487 +0000 UTC m=+144.078507638" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.148652 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.241144 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.241905 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.7418907 +0000 UTC m=+144.672435851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.345109 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.345413 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.845390066 +0000 UTC m=+144.775935217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.345563 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.345844 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.845835048 +0000 UTC m=+144.776380199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.450249 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.450657 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:46.950642071 +0000 UTC m=+144.881187222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.553169 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.553512 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.053501068 +0000 UTC m=+144.984046219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.653974 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.654207 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.154184724 +0000 UTC m=+145.084729865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.654291 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.654686 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.154676818 +0000 UTC m=+145.085221959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.700518 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 20:40:46 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:46 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:46 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.700875 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.739649 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf8h6"] Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.755070 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.755534 4765 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.255519928 +0000 UTC m=+145.186065079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.852448 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fx79t"] Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.857281 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.857626 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.357615184 +0000 UTC m=+145.288160335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.951127 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pw7wt"] Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.958990 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:46 crc kubenswrapper[4765]: E1203 20:40:46.959641 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.459617076 +0000 UTC m=+145.390162237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.971032 4765 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-645jw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.971088 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" podUID="b3d1ca02-bcba-4523-8713-12443cebf75d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.976163 4765 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2hvlc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 20:40:46 crc kubenswrapper[4765]: I1203 20:40:46.976222 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.043909 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" event={"ID":"84594671-44e4-4db9-b215-dd0d596e7ac5","Type":"ContainerStarted","Data":"33ac33bd8b63759e61abbf64fd47fa43b24837e6efc81567d4ba81e8c044f4e9"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.059731 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" event={"ID":"7fe27b06-dcc9-41a3-9768-b84fc02e378f","Type":"ContainerStarted","Data":"476c9af5f94c08ede4353bc5c8c3b2369a60df2083ab939464840e88c12ae562"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.059868 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.060484 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.060855 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.560843797 +0000 UTC m=+145.491388948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.092022 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" event={"ID":"6fce1062-dbe7-42f2-a519-d0bc96d9c16d","Type":"ContainerStarted","Data":"8951bb3550115cef1ee34850f87743d6b3e7a67909d8f5d106698a04df561f7e"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.108159 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5c4vp" podStartSLOduration=126.108138503 podStartE2EDuration="2m6.108138503s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:47.084433149 +0000 UTC m=+145.014978300" watchObservedRunningTime="2025-12-03 20:40:47.108138503 +0000 UTC m=+145.038683654" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.120209 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" podStartSLOduration=126.120184197 podStartE2EDuration="2m6.120184197s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:47.102454301 +0000 UTC m=+145.032999452" watchObservedRunningTime="2025-12-03 20:40:47.120184197 +0000 UTC 
m=+145.050729348" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.123584 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4fsnt" event={"ID":"95a05559-52f4-4623-9a2d-7326ebf3d7bb","Type":"ContainerStarted","Data":"30e792ae0ded75b0e42fdb92651328b7e05ef89af2a3bd931d6889db8da167d0"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.124200 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4fsnt" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.140234 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqrd4" podStartSLOduration=126.140179596 podStartE2EDuration="2m6.140179596s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:47.138450826 +0000 UTC m=+145.068995977" watchObservedRunningTime="2025-12-03 20:40:47.140179596 +0000 UTC m=+145.070724747" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.161564 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ft2xt"] Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.161964 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.162974 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 20:40:47.662958084 +0000 UTC m=+145.593503235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.169094 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" event={"ID":"286ddd24-e2a1-407b-95ba-5af10398ebb0","Type":"ContainerStarted","Data":"97ae77b6cf43a1436213a7587adb224aa5068dd0e4034109d70a7ee11094e4f3"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.188366 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" event={"ID":"65a4be30-e857-49e0-8bef-1d24b338b5b2","Type":"ContainerStarted","Data":"777dffd981633e093de673101e6bd3f12496a06408a3be1c3381328df3e78c56"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.213523 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7wt" event={"ID":"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd","Type":"ContainerStarted","Data":"9c7ef6b2b6491c42d7ba7fc0aa92c3ae916440bbfe5113ed9bd99c7cae1b5bd4"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.226215 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8h6" event={"ID":"1c798c04-ea6b-4b91-8ad7-f42df24d0558","Type":"ContainerStarted","Data":"dd1d9dc4905b588dd32ab8e6f659e5b14ef9d94ac87e00df6adab4f53bd0808f"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.251546 4765 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-fx79t" event={"ID":"e9b28d97-921e-45dd-bb19-ff02939e1bf7","Type":"ContainerStarted","Data":"786dd190762b6899f0df5a5eb3e0ac0ff6559865e0ba1915bcec0e6213ce2e68"} Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.252924 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-7jkhw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.252980 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7jkhw" podUID="ccac6268-00a4-448f-a04d-2d0aad175726" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.258969 4765 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-984s7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.259178 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.264094 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" 
(UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.264539 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.764526784 +0000 UTC m=+145.695071935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.280106 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-645jw" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.323233 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4fsnt" podStartSLOduration=10.323216434 podStartE2EDuration="10.323216434s" podCreationTimestamp="2025-12-03 20:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:47.175292004 +0000 UTC m=+145.105837275" watchObservedRunningTime="2025-12-03 20:40:47.323216434 +0000 UTC m=+145.253761585" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.363092 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bhtjn"] Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.364104 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.364667 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.364839 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.864823449 +0000 UTC m=+145.795368600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.365342 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.370769 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 20:40:47 crc 
kubenswrapper[4765]: E1203 20:40:47.383369 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.883340335 +0000 UTC m=+145.813885556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.392627 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhtjn"] Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.467237 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.467647 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw978\" (UniqueName: \"kubernetes.io/projected/cf4c5db7-97af-4db6-8f56-875db60da71b-kube-api-access-sw978\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.467685 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-catalog-content\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.467745 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-utilities\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.467957 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:47.967942214 +0000 UTC m=+145.898487365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.569658 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw978\" (UniqueName: \"kubernetes.io/projected/cf4c5db7-97af-4db6-8f56-875db60da71b-kube-api-access-sw978\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.569723 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-catalog-content\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.569795 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-utilities\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.569830 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: 
\"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.570168 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.070156812 +0000 UTC m=+146.000701963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.570843 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-catalog-content\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.571065 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-utilities\") pod \"redhat-marketplace-bhtjn\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.593563 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw978\" (UniqueName: \"kubernetes.io/projected/cf4c5db7-97af-4db6-8f56-875db60da71b-kube-api-access-sw978\") pod \"redhat-marketplace-bhtjn\" (UID: 
\"cf4c5db7-97af-4db6-8f56-875db60da71b\") " pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.671193 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.671428 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.171394943 +0000 UTC m=+146.101940104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.671771 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.672125 4765 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.172117225 +0000 UTC m=+146.102662376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.697577 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 20:40:47 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:47 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:47 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.697855 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.752340 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.758339 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-945qj"] Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.762935 4765 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.764040 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-945qj"] Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.767280 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.786708 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.788213 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.288184888 +0000 UTC m=+146.218730049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.888835 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-catalog-content\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.889195 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-utilities\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.889257 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.889343 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djtz\" (UniqueName: 
\"kubernetes.io/projected/b8b3270d-0399-439e-b1bc-7d1628092bbf-kube-api-access-7djtz\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.889616 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.389604994 +0000 UTC m=+146.320150145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.958614 4765 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.990048 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.990291 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djtz\" (UniqueName: \"kubernetes.io/projected/b8b3270d-0399-439e-b1bc-7d1628092bbf-kube-api-access-7djtz\") pod \"redhat-marketplace-945qj\" (UID: 
\"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.990358 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-catalog-content\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.990415 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.490390942 +0000 UTC m=+146.420936093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.990497 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-utilities\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.990569 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:47 crc kubenswrapper[4765]: E1203 20:40:47.990911 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.490904337 +0000 UTC m=+146.421449488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.991024 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-catalog-content\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:47 crc kubenswrapper[4765]: I1203 20:40:47.991318 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-utilities\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.008109 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djtz\" 
(UniqueName: \"kubernetes.io/projected/b8b3270d-0399-439e-b1bc-7d1628092bbf-kube-api-access-7djtz\") pod \"redhat-marketplace-945qj\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.086695 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhtjn"] Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.091845 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:48 crc kubenswrapper[4765]: E1203 20:40:48.091999 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.591978603 +0000 UTC m=+146.522523754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.092198 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:48 crc kubenswrapper[4765]: E1203 20:40:48.092498 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.592490849 +0000 UTC m=+146.523035990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.098798 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.193618 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:48 crc kubenswrapper[4765]: E1203 20:40:48.194015 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.694000257 +0000 UTC m=+146.624545408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.259639 4765 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T20:40:47.958642069Z","Handler":null,"Name":""} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.264328 4765 generic.go:334] "Generic (PLEG): container finished" podID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerID="07e7f96603f33a013cedcfdf71cd6e251c85f1ffd8e1ee987bba79da8fa0c472" exitCode=0 Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.264376 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-pw7wt" event={"ID":"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd","Type":"ContainerDied","Data":"07e7f96603f33a013cedcfdf71cd6e251c85f1ffd8e1ee987bba79da8fa0c472"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.266134 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.266950 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhtjn" event={"ID":"cf4c5db7-97af-4db6-8f56-875db60da71b","Type":"ContainerStarted","Data":"dbb0128bd11af427c67e6d3dd67a9899dc0e30b8f147aabab7799551d5243b8d"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.274566 4765 generic.go:334] "Generic (PLEG): container finished" podID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerID="5fc8569ec182a279b33bcbe91748d3b86b92593f423695fbcd0f9cc05831259c" exitCode=0 Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.274645 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8h6" event={"ID":"1c798c04-ea6b-4b91-8ad7-f42df24d0558","Type":"ContainerDied","Data":"5fc8569ec182a279b33bcbe91748d3b86b92593f423695fbcd0f9cc05831259c"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.278163 4765 generic.go:334] "Generic (PLEG): container finished" podID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerID="89fa27c4912856d8e04d4b8b4147007ac88fc8853dc1ee2c92be02702bfdf5b3" exitCode=0 Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.278231 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx79t" event={"ID":"e9b28d97-921e-45dd-bb19-ff02939e1bf7","Type":"ContainerDied","Data":"89fa27c4912856d8e04d4b8b4147007ac88fc8853dc1ee2c92be02702bfdf5b3"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.281068 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="c036902a-7c68-473e-966f-c5d36930fbaf" containerID="aaa7f0decb0eb6caee923fe9aecfd1616ad07a648f74874866781e765a0109a1" exitCode=0 Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.282661 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerDied","Data":"aaa7f0decb0eb6caee923fe9aecfd1616ad07a648f74874866781e765a0109a1"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.282693 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerStarted","Data":"18fae812b8162c12f3fe2225a45bfa9f5ed9fcf1ab71c1d65b92db3120848455"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.295352 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.295383 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.295414 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: 
\"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.295468 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.296836 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" event={"ID":"286ddd24-e2a1-407b-95ba-5af10398ebb0","Type":"ContainerStarted","Data":"33e3fa27e05deabc7c9d8e33765874379aa6bb2cb961a54a7b2cb30454c874c8"} Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.296885 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" event={"ID":"286ddd24-e2a1-407b-95ba-5af10398ebb0","Type":"ContainerStarted","Data":"a42b9420faeb6bb87dd789695999c1aba1fdee4c59c39884a2b8975f721cdfe3"} Dec 03 20:40:48 crc kubenswrapper[4765]: E1203 20:40:48.297032 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 20:40:48.797017219 +0000 UTC m=+146.727562370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mj2gq" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.312310 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.314954 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.331952 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.333579 4765 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.333620 4765 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.336982 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-945qj"] Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.358449 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s6dqc"] Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.365462 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.371718 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.402952 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.403437 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.417110 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.420340 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6dqc"] Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.421329 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.488671 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.504469 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.504799 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-catalog-content\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.504841 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9j5\" (UniqueName: \"kubernetes.io/projected/3836bfda-f858-413a-b552-af4e679e5d77-kube-api-access-qp9j5\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.504880 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.504901 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-utilities\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.513689 4765 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.513729 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.518558 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.606286 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-catalog-content\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.606362 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9j5\" (UniqueName: \"kubernetes.io/projected/3836bfda-f858-413a-b552-af4e679e5d77-kube-api-access-qp9j5\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.606419 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-utilities\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" 
Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.606796 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-catalog-content\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.606855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-utilities\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.620649 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9j5\" (UniqueName: \"kubernetes.io/projected/3836bfda-f858-413a-b552-af4e679e5d77-kube-api-access-qp9j5\") pod \"redhat-operators-s6dqc\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.699056 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 20:40:48 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:48 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:48 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.699114 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:48 crc 
kubenswrapper[4765]: I1203 20:40:48.740651 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f7jqz"] Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.740952 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.746481 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.770917 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7jqz"] Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.788115 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mj2gq\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.910559 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-utilities\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:48 crc kubenswrapper[4765]: I1203 20:40:48.910880 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rnzk\" (UniqueName: \"kubernetes.io/projected/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-kube-api-access-4rnzk\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:48 crc 
kubenswrapper[4765]: I1203 20:40:48.910933 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-catalog-content\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.012499 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-utilities\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.012590 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rnzk\" (UniqueName: \"kubernetes.io/projected/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-kube-api-access-4rnzk\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.012666 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-catalog-content\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.013476 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-utilities\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.013539 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-catalog-content\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.041757 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rnzk\" (UniqueName: \"kubernetes.io/projected/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-kube-api-access-4rnzk\") pod \"redhat-operators-f7jqz\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.060993 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.080614 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:49 crc kubenswrapper[4765]: W1203 20:40:49.190497 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1ca4f9675452ef8b701e10f1e49106e6c3dfaaf70a843c8a4a2093e92de2da25 WatchSource:0}: Error finding container 1ca4f9675452ef8b701e10f1e49106e6c3dfaaf70a843c8a4a2093e92de2da25: Status 404 returned error can't find the container with id 1ca4f9675452ef8b701e10f1e49106e6c3dfaaf70a843c8a4a2093e92de2da25 Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.284414 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.284699 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.304287 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.304714 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6dqc"] Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.311486 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1ca4f9675452ef8b701e10f1e49106e6c3dfaaf70a843c8a4a2093e92de2da25"} Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.314837 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ba17bc6767566054e56d27858a490c922295e013a766164aea9d56c45d1164e0"} Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.314863 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3ef950eb61ff49927939da0af95e531b66c339a7eb9edae2435f23da1557e4aa"} Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.321130 4765 generic.go:334] "Generic (PLEG): container finished" podID="c6d367ea-4210-4883-96bc-54987e5f6f7a" containerID="424b6be4031807c74c3c047fb41f6a3646db42bce12e438e18006dec58df5441" exitCode=0 Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.321203 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" event={"ID":"c6d367ea-4210-4883-96bc-54987e5f6f7a","Type":"ContainerDied","Data":"424b6be4031807c74c3c047fb41f6a3646db42bce12e438e18006dec58df5441"} Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.325801 4765 generic.go:334] "Generic (PLEG): container finished" podID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerID="f1538fb9eeb1285f697489f2413098d7e49d93dafc31635893ccaf08aab97f6d" exitCode=0 Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.326385 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-945qj" event={"ID":"b8b3270d-0399-439e-b1bc-7d1628092bbf","Type":"ContainerDied","Data":"f1538fb9eeb1285f697489f2413098d7e49d93dafc31635893ccaf08aab97f6d"} Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.326412 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-945qj" event={"ID":"b8b3270d-0399-439e-b1bc-7d1628092bbf","Type":"ContainerStarted","Data":"bfb1f87f6ed117b265abc48ef6ff3f9585527a7857b717a894a25b55f3bb4271"} Dec 
03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.328161 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7jqz"] Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.338377 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" event={"ID":"286ddd24-e2a1-407b-95ba-5af10398ebb0","Type":"ContainerStarted","Data":"c81d8ad14ff39c6283736ddb817ceb1a7c46ca69a5a912c4ffcaa25cc7043c30"} Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.347406 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerID="0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340" exitCode=0 Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.347895 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhtjn" event={"ID":"cf4c5db7-97af-4db6-8f56-875db60da71b","Type":"ContainerDied","Data":"0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340"} Dec 03 20:40:49 crc kubenswrapper[4765]: W1203 20:40:49.348700 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3836bfda_f858_413a_b552_af4e679e5d77.slice/crio-24c015497dff49ecdf7090e9d612c1cab441ab75c9d03d6129815247b97910f9 WatchSource:0}: Error finding container 24c015497dff49ecdf7090e9d612c1cab441ab75c9d03d6129815247b97910f9: Status 404 returned error can't find the container with id 24c015497dff49ecdf7090e9d612c1cab441ab75c9d03d6129815247b97910f9 Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.352985 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qkvvm" Dec 03 20:40:49 crc kubenswrapper[4765]: W1203 20:40:49.359255 4765 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-5ea33d42e5259181c94f04af9ca3743e2330f759cf610ecf2897d4297297fd93 WatchSource:0}: Error finding container 5ea33d42e5259181c94f04af9ca3743e2330f759cf610ecf2897d4297297fd93: Status 404 returned error can't find the container with id 5ea33d42e5259181c94f04af9ca3743e2330f759cf610ecf2897d4297297fd93 Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.455130 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4jqxz" podStartSLOduration=12.455113029 podStartE2EDuration="12.455113029s" podCreationTimestamp="2025-12-03 20:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:49.433968216 +0000 UTC m=+147.364513367" watchObservedRunningTime="2025-12-03 20:40:49.455113029 +0000 UTC m=+147.385658180" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.626281 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mj2gq"] Dec 03 20:40:49 crc kubenswrapper[4765]: W1203 20:40:49.635001 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad8c5639_241d_47bb_8228_d08219c7c882.slice/crio-b476781554918df824164e065f2e25ce165febc386ddf9767cf9384400e8edfb WatchSource:0}: Error finding container b476781554918df824164e065f2e25ce165febc386ddf9767cf9384400e8edfb: Status 404 returned error can't find the container with id b476781554918df824164e065f2e25ce165febc386ddf9767cf9384400e8edfb Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.698329 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Dec 03 20:40:49 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:49 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:49 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.698390 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:49 crc kubenswrapper[4765]: I1203 20:40:49.840523 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dp9jm" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.192897 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.194412 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.200950 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.201109 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.208293 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.349210 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef0ea8-6848-4a3c-b224-cc970aed9399-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.349250 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ef0ea8-6848-4a3c-b224-cc970aed9399-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.358226 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b346bdb242e638e0ccc38832378c85ddf5707037de1c29ec0d5c6532a91041c9"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.358305 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5ea33d42e5259181c94f04af9ca3743e2330f759cf610ecf2897d4297297fd93"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.361373 4765 generic.go:334] "Generic (PLEG): container finished" podID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerID="fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f" exitCode=0 Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.378897 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.379697 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.379723 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerDied","Data":"fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.379763 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerStarted","Data":"d606e26286525544feb9817d383b5ed874fd5cc6bdc9fc36c993e3f765a7323b"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.379772 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" event={"ID":"ad8c5639-241d-47bb-8228-d08219c7c882","Type":"ContainerStarted","Data":"67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.379780 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" event={"ID":"ad8c5639-241d-47bb-8228-d08219c7c882","Type":"ContainerStarted","Data":"b476781554918df824164e065f2e25ce165febc386ddf9767cf9384400e8edfb"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.388445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"58c6c86eb38f1312269613a940f5d11f303cfb4d2110d678d229890f67391849"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.389796 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.393097 4765 generic.go:334] "Generic (PLEG): container finished" podID="3836bfda-f858-413a-b552-af4e679e5d77" containerID="ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59" exitCode=0 Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.393382 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerDied","Data":"ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.393427 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerStarted","Data":"24c015497dff49ecdf7090e9d612c1cab441ab75c9d03d6129815247b97910f9"} Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.413401 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" podStartSLOduration=129.413380801 podStartE2EDuration="2m9.413380801s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:50.409176502 +0000 UTC m=+148.339721673" watchObservedRunningTime="2025-12-03 20:40:50.413380801 +0000 UTC m=+148.343925952" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.453242 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef0ea8-6848-4a3c-b224-cc970aed9399-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.453324 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ef0ea8-6848-4a3c-b224-cc970aed9399-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.454362 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef0ea8-6848-4a3c-b224-cc970aed9399-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.492894 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ef0ea8-6848-4a3c-b224-cc970aed9399-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.511506 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.656925 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.656971 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.660925 4765 patch_prober.go:28] interesting pod/console-f9d7485db-stgcm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.660965 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-stgcm" podUID="de590c28-833f-4c0b-9184-62a37519a9e0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.694590 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.697470 4765 patch_prober.go:28] interesting pod/router-default-5444994796-l6r9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 20:40:50 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Dec 03 20:40:50 crc kubenswrapper[4765]: [+]process-running ok Dec 03 20:40:50 crc kubenswrapper[4765]: healthz check failed Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.697537 4765 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-l6r9q" podUID="e8e4fc55-d165-4961-90bf-1e6ecbdf09da" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.719533 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-7jkhw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.719649 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7jkhw" podUID="ccac6268-00a4-448f-a04d-2d0aad175726" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.723251 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-7jkhw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.723508 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7jkhw" podUID="ccac6268-00a4-448f-a04d-2d0aad175726" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.727514 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.854552 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.860822 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d367ea-4210-4883-96bc-54987e5f6f7a-config-volume\") pod \"c6d367ea-4210-4883-96bc-54987e5f6f7a\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.860921 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6d367ea-4210-4883-96bc-54987e5f6f7a-secret-volume\") pod \"c6d367ea-4210-4883-96bc-54987e5f6f7a\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.861013 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v689\" (UniqueName: \"kubernetes.io/projected/c6d367ea-4210-4883-96bc-54987e5f6f7a-kube-api-access-9v689\") pod \"c6d367ea-4210-4883-96bc-54987e5f6f7a\" (UID: \"c6d367ea-4210-4883-96bc-54987e5f6f7a\") " Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.861741 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6d367ea-4210-4883-96bc-54987e5f6f7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c6d367ea-4210-4883-96bc-54987e5f6f7a" (UID: "c6d367ea-4210-4883-96bc-54987e5f6f7a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.880003 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d367ea-4210-4883-96bc-54987e5f6f7a-kube-api-access-9v689" (OuterVolumeSpecName: "kube-api-access-9v689") pod "c6d367ea-4210-4883-96bc-54987e5f6f7a" (UID: "c6d367ea-4210-4883-96bc-54987e5f6f7a"). InnerVolumeSpecName "kube-api-access-9v689". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.880073 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6d367ea-4210-4883-96bc-54987e5f6f7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c6d367ea-4210-4883-96bc-54987e5f6f7a" (UID: "c6d367ea-4210-4883-96bc-54987e5f6f7a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.962398 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v689\" (UniqueName: \"kubernetes.io/projected/c6d367ea-4210-4883-96bc-54987e5f6f7a-kube-api-access-9v689\") on node \"crc\" DevicePath \"\"" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.962429 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d367ea-4210-4883-96bc-54987e5f6f7a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 20:40:50 crc kubenswrapper[4765]: I1203 20:40:50.962441 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6d367ea-4210-4883-96bc-54987e5f6f7a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 20:40:51.013535 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 
20:40:51.426230 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"53ef0ea8-6848-4a3c-b224-cc970aed9399","Type":"ContainerStarted","Data":"b665fcb63a82c129e8f08563ca269402cb8afbed0defb39c7331e14d697f5923"} Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 20:40:51.428744 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 20:40:51.428749 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs" event={"ID":"c6d367ea-4210-4883-96bc-54987e5f6f7a","Type":"ContainerDied","Data":"84c951e2e7a8c4b26ff13500e78912c4000a685796785c33d0390abb65d9a6de"} Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 20:40:51.428797 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c951e2e7a8c4b26ff13500e78912c4000a685796785c33d0390abb65d9a6de" Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 20:40:51.699549 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:51 crc kubenswrapper[4765]: I1203 20:40:51.704110 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-l6r9q" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.255976 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 20:40:52 crc kubenswrapper[4765]: E1203 20:40:52.257138 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d367ea-4210-4883-96bc-54987e5f6f7a" containerName="collect-profiles" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.257219 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d367ea-4210-4883-96bc-54987e5f6f7a" 
containerName="collect-profiles" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.257419 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d367ea-4210-4883-96bc-54987e5f6f7a" containerName="collect-profiles" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.257931 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.263803 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.264573 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.266328 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.395840 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd282e4-ce1a-4192-9186-132619a44239-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.395915 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd282e4-ce1a-4192-9186-132619a44239-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.476635 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"53ef0ea8-6848-4a3c-b224-cc970aed9399","Type":"ContainerStarted","Data":"30fe65613dd72e8a4ef62661c1782efcfd6dc443fc1ec8d67463ac2252756b3b"} Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.496941 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd282e4-ce1a-4192-9186-132619a44239-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.497011 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd282e4-ce1a-4192-9186-132619a44239-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.497089 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd282e4-ce1a-4192-9186-132619a44239-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.515867 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd282e4-ce1a-4192-9186-132619a44239-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.598006 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.885832 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.885809037 podStartE2EDuration="2.885809037s" podCreationTimestamp="2025-12-03 20:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:52.459652218 +0000 UTC m=+150.390197369" watchObservedRunningTime="2025-12-03 20:40:52.885809037 +0000 UTC m=+150.816354198" Dec 03 20:40:52 crc kubenswrapper[4765]: I1203 20:40:52.889107 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 20:40:52 crc kubenswrapper[4765]: W1203 20:40:52.917753 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcdd282e4_ce1a_4192_9186_132619a44239.slice/crio-846d3e7aaec4d16ac393e8dc88b01424b7c2b3bd8a4d5f44d2ca66f43017d692 WatchSource:0}: Error finding container 846d3e7aaec4d16ac393e8dc88b01424b7c2b3bd8a4d5f44d2ca66f43017d692: Status 404 returned error can't find the container with id 846d3e7aaec4d16ac393e8dc88b01424b7c2b3bd8a4d5f44d2ca66f43017d692 Dec 03 20:40:53 crc kubenswrapper[4765]: I1203 20:40:53.452746 4765 generic.go:334] "Generic (PLEG): container finished" podID="53ef0ea8-6848-4a3c-b224-cc970aed9399" containerID="30fe65613dd72e8a4ef62661c1782efcfd6dc443fc1ec8d67463ac2252756b3b" exitCode=0 Dec 03 20:40:53 crc kubenswrapper[4765]: I1203 20:40:53.453024 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"53ef0ea8-6848-4a3c-b224-cc970aed9399","Type":"ContainerDied","Data":"30fe65613dd72e8a4ef62661c1782efcfd6dc443fc1ec8d67463ac2252756b3b"} Dec 03 20:40:53 crc kubenswrapper[4765]: I1203 20:40:53.455433 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd282e4-ce1a-4192-9186-132619a44239","Type":"ContainerStarted","Data":"846d3e7aaec4d16ac393e8dc88b01424b7c2b3bd8a4d5f44d2ca66f43017d692"} Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.475393 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd282e4-ce1a-4192-9186-132619a44239","Type":"ContainerStarted","Data":"bdea73ae914cede7d35a9fbff51ccc25d52886c5dc9ffc389f6da06e14fd5397"} Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.488339 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.4883189740000002 podStartE2EDuration="2.488318974s" podCreationTimestamp="2025-12-03 20:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:40:54.486938574 +0000 UTC m=+152.417483745" watchObservedRunningTime="2025-12-03 20:40:54.488318974 +0000 UTC m=+152.418864135" Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.798249 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.798313 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.838342 4765 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.935225 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef0ea8-6848-4a3c-b224-cc970aed9399-kubelet-dir\") pod \"53ef0ea8-6848-4a3c-b224-cc970aed9399\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.935342 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ef0ea8-6848-4a3c-b224-cc970aed9399-kube-api-access\") pod \"53ef0ea8-6848-4a3c-b224-cc970aed9399\" (UID: \"53ef0ea8-6848-4a3c-b224-cc970aed9399\") " Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.935365 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ef0ea8-6848-4a3c-b224-cc970aed9399-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "53ef0ea8-6848-4a3c-b224-cc970aed9399" (UID: "53ef0ea8-6848-4a3c-b224-cc970aed9399"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.935605 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef0ea8-6848-4a3c-b224-cc970aed9399-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:40:54 crc kubenswrapper[4765]: I1203 20:40:54.943057 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ef0ea8-6848-4a3c-b224-cc970aed9399-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "53ef0ea8-6848-4a3c-b224-cc970aed9399" (UID: "53ef0ea8-6848-4a3c-b224-cc970aed9399"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.037109 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ef0ea8-6848-4a3c-b224-cc970aed9399-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.488768 4765 generic.go:334] "Generic (PLEG): container finished" podID="cdd282e4-ce1a-4192-9186-132619a44239" containerID="bdea73ae914cede7d35a9fbff51ccc25d52886c5dc9ffc389f6da06e14fd5397" exitCode=0 Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.488847 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd282e4-ce1a-4192-9186-132619a44239","Type":"ContainerDied","Data":"bdea73ae914cede7d35a9fbff51ccc25d52886c5dc9ffc389f6da06e14fd5397"} Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.509268 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"53ef0ea8-6848-4a3c-b224-cc970aed9399","Type":"ContainerDied","Data":"b665fcb63a82c129e8f08563ca269402cb8afbed0defb39c7331e14d697f5923"} Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.509321 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b665fcb63a82c129e8f08563ca269402cb8afbed0defb39c7331e14d697f5923" Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.509387 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 20:40:55 crc kubenswrapper[4765]: I1203 20:40:55.876448 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4fsnt" Dec 03 20:41:00 crc kubenswrapper[4765]: I1203 20:41:00.683746 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:41:00 crc kubenswrapper[4765]: I1203 20:41:00.687922 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:41:00 crc kubenswrapper[4765]: I1203 20:41:00.742710 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7jkhw" Dec 03 20:41:01 crc kubenswrapper[4765]: I1203 20:41:01.800353 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:41:01 crc kubenswrapper[4765]: I1203 20:41:01.960913 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd282e4-ce1a-4192-9186-132619a44239-kubelet-dir\") pod \"cdd282e4-ce1a-4192-9186-132619a44239\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " Dec 03 20:41:01 crc kubenswrapper[4765]: I1203 20:41:01.961017 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd282e4-ce1a-4192-9186-132619a44239-kube-api-access\") pod \"cdd282e4-ce1a-4192-9186-132619a44239\" (UID: \"cdd282e4-ce1a-4192-9186-132619a44239\") " Dec 03 20:41:01 crc kubenswrapper[4765]: I1203 20:41:01.961043 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdd282e4-ce1a-4192-9186-132619a44239-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"cdd282e4-ce1a-4192-9186-132619a44239" (UID: "cdd282e4-ce1a-4192-9186-132619a44239"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:41:01 crc kubenswrapper[4765]: I1203 20:41:01.961488 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd282e4-ce1a-4192-9186-132619a44239-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:41:01 crc kubenswrapper[4765]: I1203 20:41:01.965746 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd282e4-ce1a-4192-9186-132619a44239-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cdd282e4-ce1a-4192-9186-132619a44239" (UID: "cdd282e4-ce1a-4192-9186-132619a44239"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:41:02 crc kubenswrapper[4765]: I1203 20:41:02.062682 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd282e4-ce1a-4192-9186-132619a44239-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 20:41:02 crc kubenswrapper[4765]: I1203 20:41:02.554585 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd282e4-ce1a-4192-9186-132619a44239","Type":"ContainerDied","Data":"846d3e7aaec4d16ac393e8dc88b01424b7c2b3bd8a4d5f44d2ca66f43017d692"} Dec 03 20:41:02 crc kubenswrapper[4765]: I1203 20:41:02.554634 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846d3e7aaec4d16ac393e8dc88b01424b7c2b3bd8a4d5f44d2ca66f43017d692" Dec 03 20:41:02 crc kubenswrapper[4765]: I1203 20:41:02.554692 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 20:41:04 crc kubenswrapper[4765]: I1203 20:41:04.090236 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:41:04 crc kubenswrapper[4765]: I1203 20:41:04.095130 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2670be8-9fe5-4210-ba7f-9538bbea79b8-metrics-certs\") pod \"network-metrics-daemon-9bhn8\" (UID: \"d2670be8-9fe5-4210-ba7f-9538bbea79b8\") " pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:41:04 crc kubenswrapper[4765]: I1203 20:41:04.132017 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9bhn8" Dec 03 20:41:09 crc kubenswrapper[4765]: I1203 20:41:09.089048 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:41:20 crc kubenswrapper[4765]: I1203 20:41:20.782091 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c2bj5" Dec 03 20:41:24 crc kubenswrapper[4765]: I1203 20:41:24.799097 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:41:24 crc kubenswrapper[4765]: I1203 20:41:24.799951 4765 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.247160 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 20:41:28 crc kubenswrapper[4765]: E1203 20:41:28.247404 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ef0ea8-6848-4a3c-b224-cc970aed9399" containerName="pruner" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.247418 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ef0ea8-6848-4a3c-b224-cc970aed9399" containerName="pruner" Dec 03 20:41:28 crc kubenswrapper[4765]: E1203 20:41:28.247443 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd282e4-ce1a-4192-9186-132619a44239" containerName="pruner" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.247451 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd282e4-ce1a-4192-9186-132619a44239" containerName="pruner" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.247552 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ef0ea8-6848-4a3c-b224-cc970aed9399" containerName="pruner" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.247571 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd282e4-ce1a-4192-9186-132619a44239" containerName="pruner" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.248007 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.252938 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.253081 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.256547 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.330012 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.330217 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.431769 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.431890 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.431973 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.452566 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.581603 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:28 crc kubenswrapper[4765]: I1203 20:41:28.736405 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 20:41:32 crc kubenswrapper[4765]: E1203 20:41:32.567964 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 20:41:32 crc kubenswrapper[4765]: E1203 20:41:32.568457 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97hbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:ni
l,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ft2xt_openshift-marketplace(c036902a-7c68-473e-966f-c5d36930fbaf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:32 crc kubenswrapper[4765]: E1203 20:41:32.569916 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ft2xt" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.647874 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.648780 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.652581 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.708161 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac77c762-18bd-4150-8829-a1a3c85759df-kube-api-access\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.708211 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-var-lock\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.708252 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.809401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac77c762-18bd-4150-8829-a1a3c85759df-kube-api-access\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.809457 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-var-lock\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.809500 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.809585 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.809587 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-var-lock\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.838945 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac77c762-18bd-4150-8829-a1a3c85759df-kube-api-access\") pod \"installer-9-crc\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:33 crc kubenswrapper[4765]: I1203 20:41:33.977491 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.072046 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.072315 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5mfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pw7wt_openshift-marketplace(a6cc66c6-bb08-4543-bc9b-0f59d5a893dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.073530 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pw7wt" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.775724 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ft2xt" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.893450 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.893776 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v82l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kf8h6_openshift-marketplace(1c798c04-ea6b-4b91-8ad7-f42df24d0558): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:36 crc kubenswrapper[4765]: E1203 20:41:36.895493 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kf8h6" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" Dec 03 20:41:38 crc 
kubenswrapper[4765]: E1203 20:41:38.283715 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pw7wt" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" Dec 03 20:41:38 crc kubenswrapper[4765]: E1203 20:41:38.284483 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kf8h6" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" Dec 03 20:41:38 crc kubenswrapper[4765]: E1203 20:41:38.371262 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 20:41:38 crc kubenswrapper[4765]: E1203 20:41:38.371416 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7djtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-945qj_openshift-marketplace(b8b3270d-0399-439e-b1bc-7d1628092bbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:38 crc kubenswrapper[4765]: E1203 20:41:38.372753 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-945qj" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" Dec 03 20:41:41 crc 
kubenswrapper[4765]: E1203 20:41:41.113822 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-945qj" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.212518 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.212880 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvhfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fx79t_openshift-marketplace(e9b28d97-921e-45dd-bb19-ff02939e1bf7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.214166 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fx79t" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" Dec 03 20:41:41 crc 
kubenswrapper[4765]: E1203 20:41:41.215252 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.215424 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-s6dqc_openshift-marketplace(3836bfda-f858-413a-b552-af4e679e5d77): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.216531 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s6dqc" podUID="3836bfda-f858-413a-b552-af4e679e5d77" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.251524 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.251678 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw978,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bhtjn_openshift-marketplace(cf4c5db7-97af-4db6-8f56-875db60da71b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.253065 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bhtjn" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" Dec 03 20:41:41 crc 
kubenswrapper[4765]: E1203 20:41:41.271675 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.271839 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rnzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-f7jqz_openshift-marketplace(f3f9dcd4-a267-4a22-9cf4-6caa549e30d0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.273001 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-f7jqz" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.347884 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.400845 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.409707 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9bhn8"] Dec 03 20:41:41 crc kubenswrapper[4765]: W1203 20:41:41.429961 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcae3ea96_ea38_44d1_818d_3cf8426dcf7b.slice/crio-a7659ed3c48dbb21e9cdc4f0a608d64d6782d1bab00eb6fa354fd4a24e31e1de WatchSource:0}: Error finding container a7659ed3c48dbb21e9cdc4f0a608d64d6782d1bab00eb6fa354fd4a24e31e1de: Status 404 returned error can't find the container with id a7659ed3c48dbb21e9cdc4f0a608d64d6782d1bab00eb6fa354fd4a24e31e1de Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.824220 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac77c762-18bd-4150-8829-a1a3c85759df","Type":"ContainerStarted","Data":"bbbd85693a7af5c2a74c6287b2892ae6a95fea2a17aa3845ce1ff49c5ecc0db1"} Dec 03 20:41:41 crc 
kubenswrapper[4765]: I1203 20:41:41.824646 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac77c762-18bd-4150-8829-a1a3c85759df","Type":"ContainerStarted","Data":"868d4e0da4092f324a251cd2b26a456232af43eb459fd88cae5f9e416cb38fef"} Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.829926 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" event={"ID":"d2670be8-9fe5-4210-ba7f-9538bbea79b8","Type":"ContainerStarted","Data":"a4be2f0a46e752b5a4cf5c5a8ee9570c2f861b5f21a33e8edcfdd8d3e1e8c09e"} Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.830008 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" event={"ID":"d2670be8-9fe5-4210-ba7f-9538bbea79b8","Type":"ContainerStarted","Data":"ad9575ce1ac4e7520b4c786867f3eedc530925216c1c234236965b5291a2ef97"} Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.839134 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cae3ea96-ea38-44d1-818d-3cf8426dcf7b","Type":"ContainerStarted","Data":"c1a663459359cd62b04d4575b28c23afc91e593e55409fab57cd87a24fa3568a"} Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.839258 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cae3ea96-ea38-44d1-818d-3cf8426dcf7b","Type":"ContainerStarted","Data":"a7659ed3c48dbb21e9cdc4f0a608d64d6782d1bab00eb6fa354fd4a24e31e1de"} Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.842082 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fx79t" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" Dec 03 20:41:41 crc 
kubenswrapper[4765]: E1203 20:41:41.842459 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f7jqz" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.844278 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s6dqc" podUID="3836bfda-f858-413a-b552-af4e679e5d77" Dec 03 20:41:41 crc kubenswrapper[4765]: E1203 20:41:41.853556 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bhtjn" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.862811 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.862791933 podStartE2EDuration="8.862791933s" podCreationTimestamp="2025-12-03 20:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:41:41.842659368 +0000 UTC m=+199.773204539" watchObservedRunningTime="2025-12-03 20:41:41.862791933 +0000 UTC m=+199.793337084" Dec 03 20:41:41 crc kubenswrapper[4765]: I1203 20:41:41.900207 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=13.900191078 podStartE2EDuration="13.900191078s" 
podCreationTimestamp="2025-12-03 20:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:41:41.894074274 +0000 UTC m=+199.824619425" watchObservedRunningTime="2025-12-03 20:41:41.900191078 +0000 UTC m=+199.830736219" Dec 03 20:41:42 crc kubenswrapper[4765]: I1203 20:41:42.843974 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9bhn8" event={"ID":"d2670be8-9fe5-4210-ba7f-9538bbea79b8","Type":"ContainerStarted","Data":"f54217f31bf9106542b17481042045ef7c68f43dd7a32dea3b3d92c711f3a5d4"} Dec 03 20:41:42 crc kubenswrapper[4765]: I1203 20:41:42.845549 4765 generic.go:334] "Generic (PLEG): container finished" podID="cae3ea96-ea38-44d1-818d-3cf8426dcf7b" containerID="c1a663459359cd62b04d4575b28c23afc91e593e55409fab57cd87a24fa3568a" exitCode=0 Dec 03 20:41:42 crc kubenswrapper[4765]: I1203 20:41:42.845688 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cae3ea96-ea38-44d1-818d-3cf8426dcf7b","Type":"ContainerDied","Data":"c1a663459359cd62b04d4575b28c23afc91e593e55409fab57cd87a24fa3568a"} Dec 03 20:41:42 crc kubenswrapper[4765]: I1203 20:41:42.865945 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9bhn8" podStartSLOduration=181.865919686 podStartE2EDuration="3m1.865919686s" podCreationTimestamp="2025-12-03 20:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:41:42.861170233 +0000 UTC m=+200.791715394" watchObservedRunningTime="2025-12-03 20:41:42.865919686 +0000 UTC m=+200.796464837" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.048774 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.144410 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kubelet-dir\") pod \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.144530 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kube-api-access\") pod \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\" (UID: \"cae3ea96-ea38-44d1-818d-3cf8426dcf7b\") " Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.144581 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cae3ea96-ea38-44d1-818d-3cf8426dcf7b" (UID: "cae3ea96-ea38-44d1-818d-3cf8426dcf7b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.144942 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.151950 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cae3ea96-ea38-44d1-818d-3cf8426dcf7b" (UID: "cae3ea96-ea38-44d1-818d-3cf8426dcf7b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.246005 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cae3ea96-ea38-44d1-818d-3cf8426dcf7b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.856866 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cae3ea96-ea38-44d1-818d-3cf8426dcf7b","Type":"ContainerDied","Data":"a7659ed3c48dbb21e9cdc4f0a608d64d6782d1bab00eb6fa354fd4a24e31e1de"} Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.857172 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7659ed3c48dbb21e9cdc4f0a608d64d6782d1bab00eb6fa354fd4a24e31e1de" Dec 03 20:41:44 crc kubenswrapper[4765]: I1203 20:41:44.856921 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 20:41:50 crc kubenswrapper[4765]: I1203 20:41:50.888154 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerStarted","Data":"0d1c640e3902026e16fc1d7ff9f8d51db75b2034b86bc11f455aff7447e3945d"} Dec 03 20:41:51 crc kubenswrapper[4765]: I1203 20:41:51.895139 4765 generic.go:334] "Generic (PLEG): container finished" podID="c036902a-7c68-473e-966f-c5d36930fbaf" containerID="0d1c640e3902026e16fc1d7ff9f8d51db75b2034b86bc11f455aff7447e3945d" exitCode=0 Dec 03 20:41:51 crc kubenswrapper[4765]: I1203 20:41:51.895229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerDied","Data":"0d1c640e3902026e16fc1d7ff9f8d51db75b2034b86bc11f455aff7447e3945d"} Dec 03 20:41:51 crc 
kubenswrapper[4765]: I1203 20:41:51.897591 4765 generic.go:334] "Generic (PLEG): container finished" podID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerID="292e74fc9e4f40c0b24060064838bbededa5afd013bd779b7cc20f72343f8806" exitCode=0 Dec 03 20:41:51 crc kubenswrapper[4765]: I1203 20:41:51.897636 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8h6" event={"ID":"1c798c04-ea6b-4b91-8ad7-f42df24d0558","Type":"ContainerDied","Data":"292e74fc9e4f40c0b24060064838bbededa5afd013bd779b7cc20f72343f8806"} Dec 03 20:41:52 crc kubenswrapper[4765]: I1203 20:41:52.914421 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerStarted","Data":"498eaa2db8fb93d69efc6b282096cbe5f04c893be29536a5b4d04c2338e7aacd"} Dec 03 20:41:52 crc kubenswrapper[4765]: I1203 20:41:52.916711 4765 generic.go:334] "Generic (PLEG): container finished" podID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerID="f8f7ed21bc5785878dc162e7a972d3b2f97163b466dc2982dd20aa77a7cc3d67" exitCode=0 Dec 03 20:41:52 crc kubenswrapper[4765]: I1203 20:41:52.916775 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7wt" event={"ID":"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd","Type":"ContainerDied","Data":"f8f7ed21bc5785878dc162e7a972d3b2f97163b466dc2982dd20aa77a7cc3d67"} Dec 03 20:41:52 crc kubenswrapper[4765]: I1203 20:41:52.920662 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8h6" event={"ID":"1c798c04-ea6b-4b91-8ad7-f42df24d0558","Type":"ContainerStarted","Data":"62f12689e7f01aff8cf6e03e1406eb13cfbdd67ad642bbdf36ee81a241b794a6"} Dec 03 20:41:52 crc kubenswrapper[4765]: I1203 20:41:52.972619 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ft2xt" 
podStartSLOduration=3.84653148 podStartE2EDuration="1m7.972596765s" podCreationTimestamp="2025-12-03 20:40:45 +0000 UTC" firstStartedPulling="2025-12-03 20:40:48.29354018 +0000 UTC m=+146.224085331" lastFinishedPulling="2025-12-03 20:41:52.419605465 +0000 UTC m=+210.350150616" observedRunningTime="2025-12-03 20:41:52.96115666 +0000 UTC m=+210.891701831" watchObservedRunningTime="2025-12-03 20:41:52.972596765 +0000 UTC m=+210.903141916" Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.930516 4765 generic.go:334] "Generic (PLEG): container finished" podID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerID="3ef4d81f558a6d24d9b5f31cabd9cc6adb4e8f0d79a05748f908f5dc1dc72526" exitCode=0 Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.930557 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-945qj" event={"ID":"b8b3270d-0399-439e-b1bc-7d1628092bbf","Type":"ContainerDied","Data":"3ef4d81f558a6d24d9b5f31cabd9cc6adb4e8f0d79a05748f908f5dc1dc72526"} Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.933234 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7wt" event={"ID":"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd","Type":"ContainerStarted","Data":"647e0606e97d3b0895334e928bb8ed0458adb8eff549fab2bfc19b5f36b4b729"} Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.935945 4765 generic.go:334] "Generic (PLEG): container finished" podID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerID="01af2f90da24c377591a7d8183bec0756a7f75b78fb7d4f6b9c1c149452aa9ad" exitCode=0 Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.935978 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx79t" event={"ID":"e9b28d97-921e-45dd-bb19-ff02939e1bf7","Type":"ContainerDied","Data":"01af2f90da24c377591a7d8183bec0756a7f75b78fb7d4f6b9c1c149452aa9ad"} Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.947562 4765 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kf8h6" podStartSLOduration=4.777087186 podStartE2EDuration="1m8.94754673s" podCreationTimestamp="2025-12-03 20:40:45 +0000 UTC" firstStartedPulling="2025-12-03 20:40:48.277579796 +0000 UTC m=+146.208124947" lastFinishedPulling="2025-12-03 20:41:52.44803935 +0000 UTC m=+210.378584491" observedRunningTime="2025-12-03 20:41:53.045500448 +0000 UTC m=+210.976045599" watchObservedRunningTime="2025-12-03 20:41:53.94754673 +0000 UTC m=+211.878091881" Dec 03 20:41:53 crc kubenswrapper[4765]: I1203 20:41:53.979890 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pw7wt" podStartSLOduration=3.896082836 podStartE2EDuration="1m8.979874522s" podCreationTimestamp="2025-12-03 20:40:45 +0000 UTC" firstStartedPulling="2025-12-03 20:40:48.265834912 +0000 UTC m=+146.196380063" lastFinishedPulling="2025-12-03 20:41:53.349626598 +0000 UTC m=+211.280171749" observedRunningTime="2025-12-03 20:41:53.977636064 +0000 UTC m=+211.908181215" watchObservedRunningTime="2025-12-03 20:41:53.979874522 +0000 UTC m=+211.910419673" Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.798812 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.799171 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 
20:41:54.799222 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.799744 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.799858 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d" gracePeriod=600 Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.944898 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx79t" event={"ID":"e9b28d97-921e-45dd-bb19-ff02939e1bf7","Type":"ContainerStarted","Data":"38339b66c0c0e598d322cac1bff9960935c0da601f1bc1d2946ddb09f913bbb7"} Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.946999 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d" exitCode=0 Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.947062 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d"} Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.949691 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-945qj" event={"ID":"b8b3270d-0399-439e-b1bc-7d1628092bbf","Type":"ContainerStarted","Data":"52456fcb9c6b96a52aaa6114c247ffdeb9b8551ade88a823ad6b6a93e50d4746"} Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.978204 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fx79t" podStartSLOduration=4.512266713 podStartE2EDuration="1m10.978185361s" podCreationTimestamp="2025-12-03 20:40:44 +0000 UTC" firstStartedPulling="2025-12-03 20:40:48.280364105 +0000 UTC m=+146.210909256" lastFinishedPulling="2025-12-03 20:41:54.746282753 +0000 UTC m=+212.676827904" observedRunningTime="2025-12-03 20:41:54.97387186 +0000 UTC m=+212.904417031" watchObservedRunningTime="2025-12-03 20:41:54.978185361 +0000 UTC m=+212.908730512" Dec 03 20:41:54 crc kubenswrapper[4765]: I1203 20:41:54.993069 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-945qj" podStartSLOduration=2.969185484 podStartE2EDuration="1m7.993053008s" podCreationTimestamp="2025-12-03 20:40:47 +0000 UTC" firstStartedPulling="2025-12-03 20:40:49.328338311 +0000 UTC m=+147.258883462" lastFinishedPulling="2025-12-03 20:41:54.352205835 +0000 UTC m=+212.282750986" observedRunningTime="2025-12-03 20:41:54.990410098 +0000 UTC m=+212.920955249" watchObservedRunningTime="2025-12-03 20:41:54.993053008 +0000 UTC m=+212.923598159" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.369735 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.369937 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.741504 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.742350 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.802807 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.957463 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"b19116da5be129719dfdfb13c9574fb7c5ab6b2a3fea2e9387b43a4a284660ec"} Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.958861 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerStarted","Data":"58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202"} Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.972505 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:41:55 crc kubenswrapper[4765]: I1203 20:41:55.972570 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.013692 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.149827 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.149876 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.182242 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.432198 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fx79t" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="registry-server" probeResult="failure" output=< Dec 03 20:41:56 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 20:41:56 crc kubenswrapper[4765]: > Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.966579 4765 generic.go:334] "Generic (PLEG): container finished" podID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerID="58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202" exitCode=0 Dec 03 20:41:56 crc kubenswrapper[4765]: I1203 20:41:56.967008 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerDied","Data":"58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202"} Dec 03 20:41:57 crc kubenswrapper[4765]: I1203 20:41:57.974687 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerStarted","Data":"6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6"} Dec 03 20:41:58 crc kubenswrapper[4765]: I1203 20:41:58.099776 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:41:58 crc kubenswrapper[4765]: I1203 20:41:58.099838 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:41:58 crc kubenswrapper[4765]: I1203 
20:41:58.154359 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:41:58 crc kubenswrapper[4765]: I1203 20:41:58.980443 4765 generic.go:334] "Generic (PLEG): container finished" podID="3836bfda-f858-413a-b552-af4e679e5d77" containerID="6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6" exitCode=0 Dec 03 20:41:58 crc kubenswrapper[4765]: I1203 20:41:58.980543 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerDied","Data":"6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6"} Dec 03 20:42:05 crc kubenswrapper[4765]: I1203 20:42:05.412494 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:42:05 crc kubenswrapper[4765]: I1203 20:42:05.457670 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:42:05 crc kubenswrapper[4765]: I1203 20:42:05.781810 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:42:06 crc kubenswrapper[4765]: I1203 20:42:06.064014 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:42:06 crc kubenswrapper[4765]: I1203 20:42:06.211780 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:42:07 crc kubenswrapper[4765]: I1203 20:42:07.048257 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kf8h6"] Dec 03 20:42:07 crc kubenswrapper[4765]: I1203 20:42:07.048470 4765 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-kf8h6" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="registry-server" containerID="cri-o://62f12689e7f01aff8cf6e03e1406eb13cfbdd67ad642bbdf36ee81a241b794a6" gracePeriod=2 Dec 03 20:42:08 crc kubenswrapper[4765]: I1203 20:42:08.139225 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:42:08 crc kubenswrapper[4765]: I1203 20:42:08.451082 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ft2xt"] Dec 03 20:42:08 crc kubenswrapper[4765]: I1203 20:42:08.451401 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ft2xt" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="registry-server" containerID="cri-o://498eaa2db8fb93d69efc6b282096cbe5f04c893be29536a5b4d04c2338e7aacd" gracePeriod=2 Dec 03 20:42:09 crc kubenswrapper[4765]: I1203 20:42:09.036950 4765 generic.go:334] "Generic (PLEG): container finished" podID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerID="62f12689e7f01aff8cf6e03e1406eb13cfbdd67ad642bbdf36ee81a241b794a6" exitCode=0 Dec 03 20:42:09 crc kubenswrapper[4765]: I1203 20:42:09.037011 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8h6" event={"ID":"1c798c04-ea6b-4b91-8ad7-f42df24d0558","Type":"ContainerDied","Data":"62f12689e7f01aff8cf6e03e1406eb13cfbdd67ad642bbdf36ee81a241b794a6"} Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.048494 4765 generic.go:334] "Generic (PLEG): container finished" podID="c036902a-7c68-473e-966f-c5d36930fbaf" containerID="498eaa2db8fb93d69efc6b282096cbe5f04c893be29536a5b4d04c2338e7aacd" exitCode=0 Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.048907 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" 
event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerDied","Data":"498eaa2db8fb93d69efc6b282096cbe5f04c893be29536a5b4d04c2338e7aacd"} Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.328477 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.456176 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-catalog-content\") pod \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.456242 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v82l6\" (UniqueName: \"kubernetes.io/projected/1c798c04-ea6b-4b91-8ad7-f42df24d0558-kube-api-access-v82l6\") pod \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.456351 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-utilities\") pod \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\" (UID: \"1c798c04-ea6b-4b91-8ad7-f42df24d0558\") " Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.457919 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-utilities" (OuterVolumeSpecName: "utilities") pod "1c798c04-ea6b-4b91-8ad7-f42df24d0558" (UID: "1c798c04-ea6b-4b91-8ad7-f42df24d0558"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.466074 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c798c04-ea6b-4b91-8ad7-f42df24d0558-kube-api-access-v82l6" (OuterVolumeSpecName: "kube-api-access-v82l6") pod "1c798c04-ea6b-4b91-8ad7-f42df24d0558" (UID: "1c798c04-ea6b-4b91-8ad7-f42df24d0558"). InnerVolumeSpecName "kube-api-access-v82l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.489980 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.522128 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c798c04-ea6b-4b91-8ad7-f42df24d0558" (UID: "1c798c04-ea6b-4b91-8ad7-f42df24d0558"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557268 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-utilities\") pod \"c036902a-7c68-473e-966f-c5d36930fbaf\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557348 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-catalog-content\") pod \"c036902a-7c68-473e-966f-c5d36930fbaf\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557457 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97hbn\" (UniqueName: \"kubernetes.io/projected/c036902a-7c68-473e-966f-c5d36930fbaf-kube-api-access-97hbn\") pod \"c036902a-7c68-473e-966f-c5d36930fbaf\" (UID: \"c036902a-7c68-473e-966f-c5d36930fbaf\") " Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557651 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557663 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c798c04-ea6b-4b91-8ad7-f42df24d0558-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557674 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v82l6\" (UniqueName: \"kubernetes.io/projected/1c798c04-ea6b-4b91-8ad7-f42df24d0558-kube-api-access-v82l6\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.557927 
4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-utilities" (OuterVolumeSpecName: "utilities") pod "c036902a-7c68-473e-966f-c5d36930fbaf" (UID: "c036902a-7c68-473e-966f-c5d36930fbaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.571967 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c036902a-7c68-473e-966f-c5d36930fbaf-kube-api-access-97hbn" (OuterVolumeSpecName: "kube-api-access-97hbn") pod "c036902a-7c68-473e-966f-c5d36930fbaf" (UID: "c036902a-7c68-473e-966f-c5d36930fbaf"). InnerVolumeSpecName "kube-api-access-97hbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.632188 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c036902a-7c68-473e-966f-c5d36930fbaf" (UID: "c036902a-7c68-473e-966f-c5d36930fbaf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.659050 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97hbn\" (UniqueName: \"kubernetes.io/projected/c036902a-7c68-473e-966f-c5d36930fbaf-kube-api-access-97hbn\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.659092 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.659102 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c036902a-7c68-473e-966f-c5d36930fbaf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.848371 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-945qj"] Dec 03 20:42:10 crc kubenswrapper[4765]: I1203 20:42:10.848670 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-945qj" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="registry-server" containerID="cri-o://52456fcb9c6b96a52aaa6114c247ffdeb9b8551ade88a823ad6b6a93e50d4746" gracePeriod=2 Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.054974 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ft2xt" event={"ID":"c036902a-7c68-473e-966f-c5d36930fbaf","Type":"ContainerDied","Data":"18fae812b8162c12f3fe2225a45bfa9f5ed9fcf1ab71c1d65b92db3120848455"} Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.055232 4765 scope.go:117] "RemoveContainer" containerID="498eaa2db8fb93d69efc6b282096cbe5f04c893be29536a5b4d04c2338e7aacd" Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.055039 4765 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ft2xt" Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.056975 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerStarted","Data":"7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521"} Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.058871 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerID="8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a" exitCode=0 Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.058930 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhtjn" event={"ID":"cf4c5db7-97af-4db6-8f56-875db60da71b","Type":"ContainerDied","Data":"8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a"} Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.061030 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8h6" event={"ID":"1c798c04-ea6b-4b91-8ad7-f42df24d0558","Type":"ContainerDied","Data":"dd1d9dc4905b588dd32ab8e6f659e5b14ef9d94ac87e00df6adab4f53bd0808f"} Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.061105 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kf8h6" Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.095485 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kf8h6"] Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.109053 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kf8h6"] Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.113140 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ft2xt"] Dec 03 20:42:11 crc kubenswrapper[4765]: I1203 20:42:11.116837 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ft2xt"] Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.068699 4765 generic.go:334] "Generic (PLEG): container finished" podID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerID="52456fcb9c6b96a52aaa6114c247ffdeb9b8551ade88a823ad6b6a93e50d4746" exitCode=0 Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.069803 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-945qj" event={"ID":"b8b3270d-0399-439e-b1bc-7d1628092bbf","Type":"ContainerDied","Data":"52456fcb9c6b96a52aaa6114c247ffdeb9b8551ade88a823ad6b6a93e50d4746"} Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.102897 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f7jqz" podStartSLOduration=5.215302594 podStartE2EDuration="1m24.102865042s" podCreationTimestamp="2025-12-03 20:40:48 +0000 UTC" firstStartedPulling="2025-12-03 20:40:50.365689853 +0000 UTC m=+148.296235004" lastFinishedPulling="2025-12-03 20:42:09.253252301 +0000 UTC m=+227.183797452" observedRunningTime="2025-12-03 20:42:12.092785859 +0000 UTC m=+230.023331050" watchObservedRunningTime="2025-12-03 20:42:12.102865042 +0000 UTC m=+230.033410233" Dec 03 20:42:12 crc 
kubenswrapper[4765]: I1203 20:42:12.173482 4765 scope.go:117] "RemoveContainer" containerID="0d1c640e3902026e16fc1d7ff9f8d51db75b2034b86bc11f455aff7447e3945d" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.240306 4765 scope.go:117] "RemoveContainer" containerID="aaa7f0decb0eb6caee923fe9aecfd1616ad07a648f74874866781e765a0109a1" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.266454 4765 scope.go:117] "RemoveContainer" containerID="62f12689e7f01aff8cf6e03e1406eb13cfbdd67ad642bbdf36ee81a241b794a6" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.288599 4765 scope.go:117] "RemoveContainer" containerID="292e74fc9e4f40c0b24060064838bbededa5afd013bd779b7cc20f72343f8806" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.370635 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" path="/var/lib/kubelet/pods/1c798c04-ea6b-4b91-8ad7-f42df24d0558/volumes" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.371445 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" path="/var/lib/kubelet/pods/c036902a-7c68-473e-966f-c5d36930fbaf/volumes" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.435512 4765 scope.go:117] "RemoveContainer" containerID="5fc8569ec182a279b33bcbe91748d3b86b92593f423695fbcd0f9cc05831259c" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.467813 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.589271 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-utilities\") pod \"b8b3270d-0399-439e-b1bc-7d1628092bbf\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.589370 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7djtz\" (UniqueName: \"kubernetes.io/projected/b8b3270d-0399-439e-b1bc-7d1628092bbf-kube-api-access-7djtz\") pod \"b8b3270d-0399-439e-b1bc-7d1628092bbf\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.589434 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-catalog-content\") pod \"b8b3270d-0399-439e-b1bc-7d1628092bbf\" (UID: \"b8b3270d-0399-439e-b1bc-7d1628092bbf\") " Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.590371 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-utilities" (OuterVolumeSpecName: "utilities") pod "b8b3270d-0399-439e-b1bc-7d1628092bbf" (UID: "b8b3270d-0399-439e-b1bc-7d1628092bbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.597398 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b3270d-0399-439e-b1bc-7d1628092bbf-kube-api-access-7djtz" (OuterVolumeSpecName: "kube-api-access-7djtz") pod "b8b3270d-0399-439e-b1bc-7d1628092bbf" (UID: "b8b3270d-0399-439e-b1bc-7d1628092bbf"). InnerVolumeSpecName "kube-api-access-7djtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.605643 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8b3270d-0399-439e-b1bc-7d1628092bbf" (UID: "b8b3270d-0399-439e-b1bc-7d1628092bbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.691073 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7djtz\" (UniqueName: \"kubernetes.io/projected/b8b3270d-0399-439e-b1bc-7d1628092bbf-kube-api-access-7djtz\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.691114 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:12 crc kubenswrapper[4765]: I1203 20:42:12.691124 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8b3270d-0399-439e-b1bc-7d1628092bbf-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.074561 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhtjn" event={"ID":"cf4c5db7-97af-4db6-8f56-875db60da71b","Type":"ContainerStarted","Data":"de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d"} Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.076631 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerStarted","Data":"e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83"} Dec 03 20:42:13 crc kubenswrapper[4765]: 
I1203 20:42:13.079812 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-945qj" event={"ID":"b8b3270d-0399-439e-b1bc-7d1628092bbf","Type":"ContainerDied","Data":"bfb1f87f6ed117b265abc48ef6ff3f9585527a7857b717a894a25b55f3bb4271"} Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.079823 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-945qj" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.079856 4765 scope.go:117] "RemoveContainer" containerID="52456fcb9c6b96a52aaa6114c247ffdeb9b8551ade88a823ad6b6a93e50d4746" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.090839 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bhtjn" podStartSLOduration=2.891337023 podStartE2EDuration="1m26.090821429s" podCreationTimestamp="2025-12-03 20:40:47 +0000 UTC" firstStartedPulling="2025-12-03 20:40:49.363151111 +0000 UTC m=+147.293696262" lastFinishedPulling="2025-12-03 20:42:12.562635517 +0000 UTC m=+230.493180668" observedRunningTime="2025-12-03 20:42:13.089255872 +0000 UTC m=+231.019801053" watchObservedRunningTime="2025-12-03 20:42:13.090821429 +0000 UTC m=+231.021366580" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.091142 4765 scope.go:117] "RemoveContainer" containerID="3ef4d81f558a6d24d9b5f31cabd9cc6adb4e8f0d79a05748f908f5dc1dc72526" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.111741 4765 scope.go:117] "RemoveContainer" containerID="f1538fb9eeb1285f697489f2413098d7e49d93dafc31635893ccaf08aab97f6d" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.114392 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s6dqc" podStartSLOduration=3.282609042 podStartE2EDuration="1m25.114374728s" podCreationTimestamp="2025-12-03 20:40:48 +0000 UTC" firstStartedPulling="2025-12-03 
20:40:50.408675477 +0000 UTC m=+148.339220628" lastFinishedPulling="2025-12-03 20:42:12.240441163 +0000 UTC m=+230.170986314" observedRunningTime="2025-12-03 20:42:13.113647065 +0000 UTC m=+231.044192216" watchObservedRunningTime="2025-12-03 20:42:13.114374728 +0000 UTC m=+231.044919879" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.132605 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-945qj"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.136263 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-945qj"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.321594 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pw7wt"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.321835 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pw7wt" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="registry-server" containerID="cri-o://647e0606e97d3b0895334e928bb8ed0458adb8eff549fab2bfc19b5f36b4b729" gracePeriod=30 Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.327632 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fx79t"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.327848 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fx79t" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="registry-server" containerID="cri-o://38339b66c0c0e598d322cac1bff9960935c0da601f1bc1d2946ddb09f913bbb7" gracePeriod=30 Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.333974 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-984s7"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.334157 4765 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerName="marketplace-operator" containerID="cri-o://f9bb7a5507580e792bbc8d8d1c1f297433d34d67ca76237ac690d10015d62bc3" gracePeriod=30 Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.351627 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhtjn"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.353831 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztctx"] Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354025 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="extract-content" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354041 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="extract-content" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354051 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="extract-content" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354057 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="extract-content" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354064 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354070 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354081 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cae3ea96-ea38-44d1-818d-3cf8426dcf7b" containerName="pruner" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354086 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae3ea96-ea38-44d1-818d-3cf8426dcf7b" containerName="pruner" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354093 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354099 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354109 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354114 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354124 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="extract-utilities" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354131 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="extract-utilities" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354143 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="extract-utilities" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354148 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="extract-utilities" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354156 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="extract-utilities" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354162 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="extract-utilities" Dec 03 20:42:13 crc kubenswrapper[4765]: E1203 20:42:13.354169 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="extract-content" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354174 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="extract-content" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354274 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354284 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae3ea96-ea38-44d1-818d-3cf8426dcf7b" containerName="pruner" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354309 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c798c04-ea6b-4b91-8ad7-f42df24d0558" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354318 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c036902a-7c68-473e-966f-c5d36930fbaf" containerName="registry-server" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.354655 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.360128 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7jqz"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.360471 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f7jqz" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="registry-server" containerID="cri-o://7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521" gracePeriod=30 Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.365217 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztctx"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.371286 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6dqc"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.399070 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9vr\" (UniqueName: \"kubernetes.io/projected/72a5b180-7b23-4bfd-a10b-c35f73c732aa-kube-api-access-mh9vr\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.399134 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72a5b180-7b23-4bfd-a10b-c35f73c732aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.399153 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72a5b180-7b23-4bfd-a10b-c35f73c732aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.500568 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9vr\" (UniqueName: \"kubernetes.io/projected/72a5b180-7b23-4bfd-a10b-c35f73c732aa-kube-api-access-mh9vr\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.500630 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72a5b180-7b23-4bfd-a10b-c35f73c732aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.500649 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72a5b180-7b23-4bfd-a10b-c35f73c732aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.501768 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72a5b180-7b23-4bfd-a10b-c35f73c732aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ztctx\" (UID: 
\"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.507683 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72a5b180-7b23-4bfd-a10b-c35f73c732aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.516917 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9vr\" (UniqueName: \"kubernetes.io/projected/72a5b180-7b23-4bfd-a10b-c35f73c732aa-kube-api-access-mh9vr\") pod \"marketplace-operator-79b997595-ztctx\" (UID: \"72a5b180-7b23-4bfd-a10b-c35f73c732aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.671170 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.743890 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f7jqz_f3f9dcd4-a267-4a22-9cf4-6caa549e30d0/registry-server/0.log" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.755013 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.777518 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2hvlc"] Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.806641 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-catalog-content\") pod \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.806767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rnzk\" (UniqueName: \"kubernetes.io/projected/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-kube-api-access-4rnzk\") pod \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.806836 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-utilities\") pod \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\" (UID: \"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0\") " Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.807920 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-utilities" (OuterVolumeSpecName: "utilities") pod "f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" (UID: "f3f9dcd4-a267-4a22-9cf4-6caa549e30d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.811574 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-kube-api-access-4rnzk" (OuterVolumeSpecName: "kube-api-access-4rnzk") pod "f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" (UID: "f3f9dcd4-a267-4a22-9cf4-6caa549e30d0"). InnerVolumeSpecName "kube-api-access-4rnzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.908437 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.908476 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rnzk\" (UniqueName: \"kubernetes.io/projected/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-kube-api-access-4rnzk\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:13 crc kubenswrapper[4765]: I1203 20:42:13.940329 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" (UID: "f3f9dcd4-a267-4a22-9cf4-6caa549e30d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.010013 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.092192 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f7jqz_f3f9dcd4-a267-4a22-9cf4-6caa549e30d0/registry-server/0.log" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.094905 4765 generic.go:334] "Generic (PLEG): container finished" podID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerID="7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521" exitCode=1 Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.095004 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerDied","Data":"7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521"} Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.095038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7jqz" event={"ID":"f3f9dcd4-a267-4a22-9cf4-6caa549e30d0","Type":"ContainerDied","Data":"d606e26286525544feb9817d383b5ed874fd5cc6bdc9fc36c993e3f765a7323b"} Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.095059 4765 scope.go:117] "RemoveContainer" containerID="7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.095200 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7jqz" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.147106 4765 generic.go:334] "Generic (PLEG): container finished" podID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerID="647e0606e97d3b0895334e928bb8ed0458adb8eff549fab2bfc19b5f36b4b729" exitCode=0 Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.147358 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7wt" event={"ID":"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd","Type":"ContainerDied","Data":"647e0606e97d3b0895334e928bb8ed0458adb8eff549fab2bfc19b5f36b4b729"} Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.154560 4765 scope.go:117] "RemoveContainer" containerID="58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.155463 4765 generic.go:334] "Generic (PLEG): container finished" podID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerID="38339b66c0c0e598d322cac1bff9960935c0da601f1bc1d2946ddb09f913bbb7" exitCode=0 Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.155537 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx79t" event={"ID":"e9b28d97-921e-45dd-bb19-ff02939e1bf7","Type":"ContainerDied","Data":"38339b66c0c0e598d322cac1bff9960935c0da601f1bc1d2946ddb09f913bbb7"} Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.160047 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7jqz"] Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.165653 4765 generic.go:334] "Generic (PLEG): container finished" podID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerID="f9bb7a5507580e792bbc8d8d1c1f297433d34d67ca76237ac690d10015d62bc3" exitCode=0 Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.165716 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-984s7" event={"ID":"d04f847d-2261-48b3-9314-7b3b1cb8af38","Type":"ContainerDied","Data":"f9bb7a5507580e792bbc8d8d1c1f297433d34d67ca76237ac690d10015d62bc3"} Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.172985 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bhtjn" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="registry-server" containerID="cri-o://de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d" gracePeriod=30 Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.173389 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s6dqc" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="registry-server" containerID="cri-o://e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83" gracePeriod=30 Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.190261 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f7jqz"] Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.195527 4765 scope.go:117] "RemoveContainer" containerID="fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.238892 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ztctx"] Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.338926 4765 scope.go:117] "RemoveContainer" containerID="7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521" Dec 03 20:42:14 crc kubenswrapper[4765]: E1203 20:42:14.339532 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521\": container with ID starting with 
7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521 not found: ID does not exist" containerID="7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.339581 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521"} err="failed to get container status \"7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521\": rpc error: code = NotFound desc = could not find container \"7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521\": container with ID starting with 7a0cb439cb05b53552c60f34521ed33cbc1d7d5ecd98f16b355a55ab8277b521 not found: ID does not exist" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.339612 4765 scope.go:117] "RemoveContainer" containerID="58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202" Dec 03 20:42:14 crc kubenswrapper[4765]: E1203 20:42:14.339975 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202\": container with ID starting with 58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202 not found: ID does not exist" containerID="58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.339994 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202"} err="failed to get container status \"58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202\": rpc error: code = NotFound desc = could not find container \"58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202\": container with ID starting with 58ca9cff8c00dc3a19623bd29bff96b4279101294350d778a442cf663e76a202 not found: ID does not 
exist" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.340010 4765 scope.go:117] "RemoveContainer" containerID="fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f" Dec 03 20:42:14 crc kubenswrapper[4765]: E1203 20:42:14.340489 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f\": container with ID starting with fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f not found: ID does not exist" containerID="fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.340506 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f"} err="failed to get container status \"fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f\": rpc error: code = NotFound desc = could not find container \"fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f\": container with ID starting with fe1cd028d816f0793a6e3c5e23ac960f89e256b7f8ddb9264a4fbaa9590df16f not found: ID does not exist" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.368030 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b3270d-0399-439e-b1bc-7d1628092bbf" path="/var/lib/kubelet/pods/b8b3270d-0399-439e-b1bc-7d1628092bbf/volumes" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.368627 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" path="/var/lib/kubelet/pods/f3f9dcd4-a267-4a22-9cf4-6caa549e30d0/volumes" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.421845 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.440719 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.484375 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518373 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-utilities\") pod \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518438 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99qlm\" (UniqueName: \"kubernetes.io/projected/d04f847d-2261-48b3-9314-7b3b1cb8af38-kube-api-access-99qlm\") pod \"d04f847d-2261-48b3-9314-7b3b1cb8af38\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518470 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-utilities\") pod \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518512 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5mfr\" (UniqueName: \"kubernetes.io/projected/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-kube-api-access-g5mfr\") pod \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518546 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-trusted-ca\") pod \"d04f847d-2261-48b3-9314-7b3b1cb8af38\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518602 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvhfq\" (UniqueName: \"kubernetes.io/projected/e9b28d97-921e-45dd-bb19-ff02939e1bf7-kube-api-access-kvhfq\") pod \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518629 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-catalog-content\") pod \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\" (UID: \"e9b28d97-921e-45dd-bb19-ff02939e1bf7\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518650 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-catalog-content\") pod \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\" (UID: \"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.518703 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-operator-metrics\") pod \"d04f847d-2261-48b3-9314-7b3b1cb8af38\" (UID: \"d04f847d-2261-48b3-9314-7b3b1cb8af38\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.522431 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-trusted-ca" 
(OuterVolumeSpecName: "marketplace-trusted-ca") pod "d04f847d-2261-48b3-9314-7b3b1cb8af38" (UID: "d04f847d-2261-48b3-9314-7b3b1cb8af38"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.524423 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-utilities" (OuterVolumeSpecName: "utilities") pod "a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" (UID: "a6cc66c6-bb08-4543-bc9b-0f59d5a893dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.530403 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-utilities" (OuterVolumeSpecName: "utilities") pod "e9b28d97-921e-45dd-bb19-ff02939e1bf7" (UID: "e9b28d97-921e-45dd-bb19-ff02939e1bf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.531341 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d04f847d-2261-48b3-9314-7b3b1cb8af38" (UID: "d04f847d-2261-48b3-9314-7b3b1cb8af38"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.533286 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04f847d-2261-48b3-9314-7b3b1cb8af38-kube-api-access-99qlm" (OuterVolumeSpecName: "kube-api-access-99qlm") pod "d04f847d-2261-48b3-9314-7b3b1cb8af38" (UID: "d04f847d-2261-48b3-9314-7b3b1cb8af38"). InnerVolumeSpecName "kube-api-access-99qlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.533429 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b28d97-921e-45dd-bb19-ff02939e1bf7-kube-api-access-kvhfq" (OuterVolumeSpecName: "kube-api-access-kvhfq") pod "e9b28d97-921e-45dd-bb19-ff02939e1bf7" (UID: "e9b28d97-921e-45dd-bb19-ff02939e1bf7"). InnerVolumeSpecName "kube-api-access-kvhfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.533538 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-kube-api-access-g5mfr" (OuterVolumeSpecName: "kube-api-access-g5mfr") pod "a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" (UID: "a6cc66c6-bb08-4543-bc9b-0f59d5a893dd"). InnerVolumeSpecName "kube-api-access-g5mfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.567772 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.613144 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" (UID: "a6cc66c6-bb08-4543-bc9b-0f59d5a893dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.620711 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-utilities\") pod \"cf4c5db7-97af-4db6-8f56-875db60da71b\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.620847 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-catalog-content\") pod \"cf4c5db7-97af-4db6-8f56-875db60da71b\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.620877 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw978\" (UniqueName: \"kubernetes.io/projected/cf4c5db7-97af-4db6-8f56-875db60da71b-kube-api-access-sw978\") pod \"cf4c5db7-97af-4db6-8f56-875db60da71b\" (UID: \"cf4c5db7-97af-4db6-8f56-875db60da71b\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621112 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5mfr\" (UniqueName: \"kubernetes.io/projected/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-kube-api-access-g5mfr\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621130 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621142 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvhfq\" (UniqueName: \"kubernetes.io/projected/e9b28d97-921e-45dd-bb19-ff02939e1bf7-kube-api-access-kvhfq\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc 
kubenswrapper[4765]: I1203 20:42:14.621151 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621161 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d04f847d-2261-48b3-9314-7b3b1cb8af38-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621170 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621178 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99qlm\" (UniqueName: \"kubernetes.io/projected/d04f847d-2261-48b3-9314-7b3b1cb8af38-kube-api-access-99qlm\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621186 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.621546 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-utilities" (OuterVolumeSpecName: "utilities") pod "cf4c5db7-97af-4db6-8f56-875db60da71b" (UID: "cf4c5db7-97af-4db6-8f56-875db60da71b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.625013 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4c5db7-97af-4db6-8f56-875db60da71b-kube-api-access-sw978" (OuterVolumeSpecName: "kube-api-access-sw978") pod "cf4c5db7-97af-4db6-8f56-875db60da71b" (UID: "cf4c5db7-97af-4db6-8f56-875db60da71b"). InnerVolumeSpecName "kube-api-access-sw978". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.636524 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b28d97-921e-45dd-bb19-ff02939e1bf7" (UID: "e9b28d97-921e-45dd-bb19-ff02939e1bf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.644136 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf4c5db7-97af-4db6-8f56-875db60da71b" (UID: "cf4c5db7-97af-4db6-8f56-875db60da71b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.722793 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.723100 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b28d97-921e-45dd-bb19-ff02939e1bf7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.723112 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4c5db7-97af-4db6-8f56-875db60da71b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.723125 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw978\" (UniqueName: \"kubernetes.io/projected/cf4c5db7-97af-4db6-8f56-875db60da71b-kube-api-access-sw978\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.849912 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s6dqc_3836bfda-f858-413a-b552-af4e679e5d77/registry-server/0.log" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.850690 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.926094 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-catalog-content\") pod \"3836bfda-f858-413a-b552-af4e679e5d77\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.926154 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-utilities\") pod \"3836bfda-f858-413a-b552-af4e679e5d77\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.926345 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp9j5\" (UniqueName: \"kubernetes.io/projected/3836bfda-f858-413a-b552-af4e679e5d77-kube-api-access-qp9j5\") pod \"3836bfda-f858-413a-b552-af4e679e5d77\" (UID: \"3836bfda-f858-413a-b552-af4e679e5d77\") " Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.926841 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-utilities" (OuterVolumeSpecName: "utilities") pod "3836bfda-f858-413a-b552-af4e679e5d77" (UID: "3836bfda-f858-413a-b552-af4e679e5d77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:14 crc kubenswrapper[4765]: I1203 20:42:14.930091 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3836bfda-f858-413a-b552-af4e679e5d77-kube-api-access-qp9j5" (OuterVolumeSpecName: "kube-api-access-qp9j5") pod "3836bfda-f858-413a-b552-af4e679e5d77" (UID: "3836bfda-f858-413a-b552-af4e679e5d77"). InnerVolumeSpecName "kube-api-access-qp9j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.027690 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp9j5\" (UniqueName: \"kubernetes.io/projected/3836bfda-f858-413a-b552-af4e679e5d77-kube-api-access-qp9j5\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.027728 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.057111 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3836bfda-f858-413a-b552-af4e679e5d77" (UID: "3836bfda-f858-413a-b552-af4e679e5d77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.129004 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3836bfda-f858-413a-b552-af4e679e5d77-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.186972 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s6dqc_3836bfda-f858-413a-b552-af4e679e5d77/registry-server/0.log" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.188569 4765 generic.go:334] "Generic (PLEG): container finished" podID="3836bfda-f858-413a-b552-af4e679e5d77" containerID="e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83" exitCode=1 Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.188631 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6dqc" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.188616 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerDied","Data":"e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.188868 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6dqc" event={"ID":"3836bfda-f858-413a-b552-af4e679e5d77","Type":"ContainerDied","Data":"24c015497dff49ecdf7090e9d612c1cab441ab75c9d03d6129815247b97910f9"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.188895 4765 scope.go:117] "RemoveContainer" containerID="e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.191497 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx79t" event={"ID":"e9b28d97-921e-45dd-bb19-ff02939e1bf7","Type":"ContainerDied","Data":"786dd190762b6899f0df5a5eb3e0ac0ff6559865e0ba1915bcec0e6213ce2e68"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.191589 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fx79t" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.194391 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" event={"ID":"d04f847d-2261-48b3-9314-7b3b1cb8af38","Type":"ContainerDied","Data":"cbe8a9ec97755297c5af200aa8e713c66aa61570341243e01bd96bc22dd476a1"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.194474 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-984s7" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.199363 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" event={"ID":"72a5b180-7b23-4bfd-a10b-c35f73c732aa","Type":"ContainerStarted","Data":"2e6630748cc2fc47c7e9d290edb39c8aeb87eee8f15c4b0e09e6c9a87aff06b3"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.199400 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" event={"ID":"72a5b180-7b23-4bfd-a10b-c35f73c732aa","Type":"ContainerStarted","Data":"2d83797bae6ba4a1efedd8b4e07d815d10875a22f8df1a6c337a23975b036d25"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.200447 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.205366 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.206283 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7wt" event={"ID":"a6cc66c6-bb08-4543-bc9b-0f59d5a893dd","Type":"ContainerDied","Data":"9c7ef6b2b6491c42d7ba7fc0aa92c3ae916440bbfe5113ed9bd99c7cae1b5bd4"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.206398 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pw7wt" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.209717 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerID="de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d" exitCode=0 Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.209762 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhtjn" event={"ID":"cf4c5db7-97af-4db6-8f56-875db60da71b","Type":"ContainerDied","Data":"de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.209795 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhtjn" event={"ID":"cf4c5db7-97af-4db6-8f56-875db60da71b","Type":"ContainerDied","Data":"dbb0128bd11af427c67e6d3dd67a9899dc0e30b8f147aabab7799551d5243b8d"} Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.209864 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhtjn" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.219233 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" podStartSLOduration=2.21921346 podStartE2EDuration="2.21921346s" podCreationTimestamp="2025-12-03 20:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:42:15.217172118 +0000 UTC m=+233.147717279" watchObservedRunningTime="2025-12-03 20:42:15.21921346 +0000 UTC m=+233.149758611" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.250875 4765 scope.go:117] "RemoveContainer" containerID="6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.258426 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-984s7"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.265259 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-984s7"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.276915 4765 scope.go:117] "RemoveContainer" containerID="ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.280651 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fx79t"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.284360 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fx79t"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.292725 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pw7wt"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.299606 4765 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pw7wt"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.303383 4765 scope.go:117] "RemoveContainer" containerID="e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.304766 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83\": container with ID starting with e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83 not found: ID does not exist" containerID="e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.304818 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83"} err="failed to get container status \"e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83\": rpc error: code = NotFound desc = could not find container \"e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83\": container with ID starting with e7f63deae50320f36a13f49f2e14e19e3af6d0a1ddcd2e08a9e5993d271acf83 not found: ID does not exist" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.304847 4765 scope.go:117] "RemoveContainer" containerID="6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.305179 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6\": container with ID starting with 6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6 not found: ID does not exist" containerID="6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6" Dec 03 20:42:15 crc 
kubenswrapper[4765]: I1203 20:42:15.305206 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6"} err="failed to get container status \"6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6\": rpc error: code = NotFound desc = could not find container \"6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6\": container with ID starting with 6f45a1cb072ca268140ffe8c4e2c0c12a53752626859c56f2e16a0018476c2d6 not found: ID does not exist" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.305230 4765 scope.go:117] "RemoveContainer" containerID="ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.305605 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59\": container with ID starting with ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59 not found: ID does not exist" containerID="ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.305630 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59"} err="failed to get container status \"ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59\": rpc error: code = NotFound desc = could not find container \"ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59\": container with ID starting with ed8944c70c12369671902420f1417422db558db3611ebef51168c6cffad10c59 not found: ID does not exist" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.305648 4765 scope.go:117] "RemoveContainer" containerID="38339b66c0c0e598d322cac1bff9960935c0da601f1bc1d2946ddb09f913bbb7" Dec 03 
20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.308573 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhtjn"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.324158 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhtjn"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.327428 4765 scope.go:117] "RemoveContainer" containerID="01af2f90da24c377591a7d8183bec0756a7f75b78fb7d4f6b9c1c149452aa9ad" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.330063 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6dqc"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.334918 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s6dqc"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.346336 4765 scope.go:117] "RemoveContainer" containerID="89fa27c4912856d8e04d4b8b4147007ac88fc8853dc1ee2c92be02702bfdf5b3" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.358392 4765 scope.go:117] "RemoveContainer" containerID="f9bb7a5507580e792bbc8d8d1c1f297433d34d67ca76237ac690d10015d62bc3" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.369783 4765 scope.go:117] "RemoveContainer" containerID="647e0606e97d3b0895334e928bb8ed0458adb8eff549fab2bfc19b5f36b4b729" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.380607 4765 scope.go:117] "RemoveContainer" containerID="f8f7ed21bc5785878dc162e7a972d3b2f97163b466dc2982dd20aa77a7cc3d67" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.396763 4765 scope.go:117] "RemoveContainer" containerID="07e7f96603f33a013cedcfdf71cd6e251c85f1ffd8e1ee987bba79da8fa0c472" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.409938 4765 scope.go:117] "RemoveContainer" containerID="de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.424759 4765 
scope.go:117] "RemoveContainer" containerID="8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.438115 4765 scope.go:117] "RemoveContainer" containerID="0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.451733 4765 scope.go:117] "RemoveContainer" containerID="de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.452003 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d\": container with ID starting with de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d not found: ID does not exist" containerID="de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.452032 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d"} err="failed to get container status \"de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d\": rpc error: code = NotFound desc = could not find container \"de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d\": container with ID starting with de39000f6c1c01fba5310c693e545a010713a1ce18e4de221fe13b7f3ce4262d not found: ID does not exist" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.452055 4765 scope.go:117] "RemoveContainer" containerID="8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.452289 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a\": container with ID starting with 
8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a not found: ID does not exist" containerID="8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.452354 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a"} err="failed to get container status \"8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a\": rpc error: code = NotFound desc = could not find container \"8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a\": container with ID starting with 8b28d285ea498f0b2f63f5a96fd716095432a56be4b7c878168b2431b3094f4a not found: ID does not exist" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.452383 4765 scope.go:117] "RemoveContainer" containerID="0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.452700 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340\": container with ID starting with 0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340 not found: ID does not exist" containerID="0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.452722 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340"} err="failed to get container status \"0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340\": rpc error: code = NotFound desc = could not find container \"0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340\": container with ID starting with 0a086ee50440259cbe1695a476ea0f7054c0a2977f8614ac1ba8f02ca8cee340 not found: ID does not 
exist" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864068 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ftrfh"] Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864262 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864274 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864282 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864320 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864330 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864338 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864351 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864358 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864370 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="extract-content" Dec 03 20:42:15 crc 
kubenswrapper[4765]: I1203 20:42:15.864377 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864387 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864394 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864403 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864411 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864422 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864429 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864440 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864446 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864458 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="extract-utilities" Dec 03 20:42:15 crc 
kubenswrapper[4765]: I1203 20:42:15.864464 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864476 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864483 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864494 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864501 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864508 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864515 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864526 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864533 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="extract-content" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864544 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="extract-utilities" Dec 03 20:42:15 crc 
kubenswrapper[4765]: I1203 20:42:15.864551 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="extract-utilities" Dec 03 20:42:15 crc kubenswrapper[4765]: E1203 20:42:15.864559 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerName="marketplace-operator" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864566 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerName="marketplace-operator" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864666 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3836bfda-f858-413a-b552-af4e679e5d77" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864681 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864693 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864700 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864711 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f9dcd4-a267-4a22-9cf4-6caa549e30d0" containerName="registry-server" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.864721 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" containerName="marketplace-operator" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.865549 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.868578 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.874203 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftrfh"] Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.938345 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80d51b27-d825-4e91-81bd-8e3297c4f550-catalog-content\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.938856 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80d51b27-d825-4e91-81bd-8e3297c4f550-utilities\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:15 crc kubenswrapper[4765]: I1203 20:42:15.938906 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzbf\" (UniqueName: \"kubernetes.io/projected/80d51b27-d825-4e91-81bd-8e3297c4f550-kube-api-access-lmzbf\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.039878 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmzbf\" (UniqueName: \"kubernetes.io/projected/80d51b27-d825-4e91-81bd-8e3297c4f550-kube-api-access-lmzbf\") pod \"redhat-operators-ftrfh\" (UID: 
\"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.040051 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80d51b27-d825-4e91-81bd-8e3297c4f550-catalog-content\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.040146 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80d51b27-d825-4e91-81bd-8e3297c4f550-utilities\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.040766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80d51b27-d825-4e91-81bd-8e3297c4f550-utilities\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.041701 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80d51b27-d825-4e91-81bd-8e3297c4f550-catalog-content\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.055763 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmzbf\" (UniqueName: \"kubernetes.io/projected/80d51b27-d825-4e91-81bd-8e3297c4f550-kube-api-access-lmzbf\") pod \"redhat-operators-ftrfh\" (UID: \"80d51b27-d825-4e91-81bd-8e3297c4f550\") " 
pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.196052 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.369355 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3836bfda-f858-413a-b552-af4e679e5d77" path="/var/lib/kubelet/pods/3836bfda-f858-413a-b552-af4e679e5d77/volumes" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.369954 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cc66c6-bb08-4543-bc9b-0f59d5a893dd" path="/var/lib/kubelet/pods/a6cc66c6-bb08-4543-bc9b-0f59d5a893dd/volumes" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.370526 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4c5db7-97af-4db6-8f56-875db60da71b" path="/var/lib/kubelet/pods/cf4c5db7-97af-4db6-8f56-875db60da71b/volumes" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.371577 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04f847d-2261-48b3-9314-7b3b1cb8af38" path="/var/lib/kubelet/pods/d04f847d-2261-48b3-9314-7b3b1cb8af38/volumes" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.371990 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b28d97-921e-45dd-bb19-ff02939e1bf7" path="/var/lib/kubelet/pods/e9b28d97-921e-45dd-bb19-ff02939e1bf7/volumes" Dec 03 20:42:16 crc kubenswrapper[4765]: I1203 20:42:16.404420 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftrfh"] Dec 03 20:42:16 crc kubenswrapper[4765]: W1203 20:42:16.409934 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d51b27_d825_4e91_81bd_8e3297c4f550.slice/crio-1f679aad63e3beb3445a73880e0646c92ec74c99332cf8f108a94da80c3262f4 WatchSource:0}: Error finding 
container 1f679aad63e3beb3445a73880e0646c92ec74c99332cf8f108a94da80c3262f4: Status 404 returned error can't find the container with id 1f679aad63e3beb3445a73880e0646c92ec74c99332cf8f108a94da80c3262f4 Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.236260 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrfh" event={"ID":"80d51b27-d825-4e91-81bd-8e3297c4f550","Type":"ContainerStarted","Data":"1f679aad63e3beb3445a73880e0646c92ec74c99332cf8f108a94da80c3262f4"} Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.257153 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r6rmn"] Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.258899 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.260908 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.268710 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r6rmn"] Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.361020 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-catalog-content\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.361081 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwx9c\" (UniqueName: \"kubernetes.io/projected/1feb87dd-af7d-4048-bf1c-df1541bb8301-kube-api-access-hwx9c\") pod \"community-operators-r6rmn\" (UID: 
\"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.361110 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-utilities\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.462544 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-catalog-content\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.463154 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwx9c\" (UniqueName: \"kubernetes.io/projected/1feb87dd-af7d-4048-bf1c-df1541bb8301-kube-api-access-hwx9c\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.463084 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-catalog-content\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.463242 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-utilities\") pod \"community-operators-r6rmn\" (UID: 
\"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.463582 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-utilities\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.482826 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwx9c\" (UniqueName: \"kubernetes.io/projected/1feb87dd-af7d-4048-bf1c-df1541bb8301-kube-api-access-hwx9c\") pod \"community-operators-r6rmn\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:17 crc kubenswrapper[4765]: I1203 20:42:17.587662 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.007878 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r6rmn"] Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.242292 4765 generic.go:334] "Generic (PLEG): container finished" podID="80d51b27-d825-4e91-81bd-8e3297c4f550" containerID="2feb241b2a545818bab1724c9aeb22eb8cac7b09f409676f5034929e11e0395e" exitCode=0 Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.242378 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrfh" event={"ID":"80d51b27-d825-4e91-81bd-8e3297c4f550","Type":"ContainerDied","Data":"2feb241b2a545818bab1724c9aeb22eb8cac7b09f409676f5034929e11e0395e"} Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.244066 4765 generic.go:334] "Generic (PLEG): container finished" podID="1feb87dd-af7d-4048-bf1c-df1541bb8301" 
containerID="fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25" exitCode=0 Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.244112 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r6rmn" event={"ID":"1feb87dd-af7d-4048-bf1c-df1541bb8301","Type":"ContainerDied","Data":"fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25"} Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.244146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r6rmn" event={"ID":"1feb87dd-af7d-4048-bf1c-df1541bb8301","Type":"ContainerStarted","Data":"7c0e0f206b8906c05b88b1b404f264d52d5661a1b9505a974bdac4ca880f594f"} Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.260134 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k92tm"] Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.273909 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.276882 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.281810 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k92tm"] Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.373670 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcdf\" (UniqueName: \"kubernetes.io/projected/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-kube-api-access-dbcdf\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.373735 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-utilities\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.373762 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-catalog-content\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.475379 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-utilities\") pod \"redhat-marketplace-k92tm\" (UID: 
\"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.475442 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-catalog-content\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.475539 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbcdf\" (UniqueName: \"kubernetes.io/projected/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-kube-api-access-dbcdf\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.475925 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-utilities\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.476047 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-catalog-content\") pod \"redhat-marketplace-k92tm\" (UID: \"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.509965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbcdf\" (UniqueName: \"kubernetes.io/projected/aa9f4500-9c6f-4415-bea7-eebfda74d3ee-kube-api-access-dbcdf\") pod \"redhat-marketplace-k92tm\" (UID: 
\"aa9f4500-9c6f-4415-bea7-eebfda74d3ee\") " pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.586737 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:18 crc kubenswrapper[4765]: I1203 20:42:18.793368 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k92tm"] Dec 03 20:42:18 crc kubenswrapper[4765]: W1203 20:42:18.802224 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9f4500_9c6f_4415_bea7_eebfda74d3ee.slice/crio-9247ebc638829edf4e6ba57ce99a76f318184905c7622a4778c3d1b685432e1d WatchSource:0}: Error finding container 9247ebc638829edf4e6ba57ce99a76f318184905c7622a4778c3d1b685432e1d: Status 404 returned error can't find the container with id 9247ebc638829edf4e6ba57ce99a76f318184905c7622a4778c3d1b685432e1d Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.249423 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrfh" event={"ID":"80d51b27-d825-4e91-81bd-8e3297c4f550","Type":"ContainerStarted","Data":"a232439d2a49b07c17de2c600c9fc79d2e1e41fcc75421d40d405358b0a8c927"} Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.251982 4765 generic.go:334] "Generic (PLEG): container finished" podID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerID="aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e" exitCode=0 Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.252039 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r6rmn" event={"ID":"1feb87dd-af7d-4048-bf1c-df1541bb8301","Type":"ContainerDied","Data":"aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e"} Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.253192 4765 generic.go:334] "Generic (PLEG): 
container finished" podID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" containerID="f0ebf1eccb3201bbc95e33e244f4ce954f521cbf2a129f38c0c95407e6236dd5" exitCode=0 Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.253232 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k92tm" event={"ID":"aa9f4500-9c6f-4415-bea7-eebfda74d3ee","Type":"ContainerDied","Data":"f0ebf1eccb3201bbc95e33e244f4ce954f521cbf2a129f38c0c95407e6236dd5"} Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.253257 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k92tm" event={"ID":"aa9f4500-9c6f-4415-bea7-eebfda74d3ee","Type":"ContainerStarted","Data":"9247ebc638829edf4e6ba57ce99a76f318184905c7622a4778c3d1b685432e1d"} Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.392488 4765 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.392835 4765 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.393109 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393127 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.393146 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393155 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 20:42:19 crc 
kubenswrapper[4765]: E1203 20:42:19.393165 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393172 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.393186 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393194 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393188 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d" gracePeriod=15 Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393218 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432" gracePeriod=15 Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393264 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467" gracePeriod=15 Dec 03 20:42:19 crc 
kubenswrapper[4765]: I1203 20:42:19.393360 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d" gracePeriod=15 Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393415 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0" gracePeriod=15 Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.393207 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393460 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.393485 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393493 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393706 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393721 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393732 4765 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393743 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393754 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.393880 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393891 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.393977 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.397257 4765 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.397854 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.401675 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.488860 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.488911 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.488930 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.488949 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.488989 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.489017 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.489032 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.489069 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589769 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589812 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589871 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589894 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589913 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589921 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589935 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589995 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589979 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.590012 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.590040 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.590041 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.589978 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: I1203 20:42:19.590070 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:19 crc kubenswrapper[4765]: E1203 20:42:19.650547 4765 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-r6rmn.187dcf489555f40f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-r6rmn,UID:1feb87dd-af7d-4048-bf1c-df1541bb8301,APIVersion:v1,ResourceVersion:29394,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 395ms (395ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,LastTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.280474 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.283057 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.283941 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432" exitCode=0 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.283980 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0" exitCode=0 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.283991 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467" exitCode=0 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.284002 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d" exitCode=2 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.284039 4765 scope.go:117] "RemoveContainer" containerID="2199997733b2e8e1214e7b7e195e161bdd5ed8f8f8e1bc72d05e0f3415059ad6" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.286733 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r6rmn" event={"ID":"1feb87dd-af7d-4048-bf1c-df1541bb8301","Type":"ContainerStarted","Data":"445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9"} Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.287823 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.288824 4765 generic.go:334] "Generic (PLEG): container finished" podID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" containerID="09a428ff46033ce501906e750e6236dc44cdb0108c01d07a710beb422a231ab9" exitCode=0 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.288973 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k92tm" 
event={"ID":"aa9f4500-9c6f-4415-bea7-eebfda74d3ee","Type":"ContainerDied","Data":"09a428ff46033ce501906e750e6236dc44cdb0108c01d07a710beb422a231ab9"} Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.289520 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.289881 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.290936 4765 generic.go:334] "Generic (PLEG): container finished" podID="ac77c762-18bd-4150-8829-a1a3c85759df" containerID="bbbd85693a7af5c2a74c6287b2892ae6a95fea2a17aa3845ce1ff49c5ecc0db1" exitCode=0 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.290997 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac77c762-18bd-4150-8829-a1a3c85759df","Type":"ContainerDied","Data":"bbbd85693a7af5c2a74c6287b2892ae6a95fea2a17aa3845ce1ff49c5ecc0db1"} Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.291398 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.291748 4765 status_manager.go:851] 
"Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.291946 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.293516 4765 generic.go:334] "Generic (PLEG): container finished" podID="80d51b27-d825-4e91-81bd-8e3297c4f550" containerID="a232439d2a49b07c17de2c600c9fc79d2e1e41fcc75421d40d405358b0a8c927" exitCode=0 Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.293549 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrfh" event={"ID":"80d51b27-d825-4e91-81bd-8e3297c4f550","Type":"ContainerDied","Data":"a232439d2a49b07c17de2c600c9fc79d2e1e41fcc75421d40d405358b0a8c927"} Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.297740 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.298128 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.298290 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:20 crc kubenswrapper[4765]: I1203 20:42:20.298492 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.300616 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k92tm" event={"ID":"aa9f4500-9c6f-4415-bea7-eebfda74d3ee","Type":"ContainerStarted","Data":"66ddc225f853cffb10d05c6e51b34539c0d0d95643889c95aa3285ccee99d4e2"} Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.301226 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.301650 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.301940 
4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.302239 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.302757 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrfh" event={"ID":"80d51b27-d825-4e91-81bd-8e3297c4f550","Type":"ContainerStarted","Data":"7646f16889aa2b6039cee4d072f62e2390d56753a177f2e7bc7e08f892660b61"} Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.303128 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.303406 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.303642 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" 
pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.303874 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.305243 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.629480 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.630539 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.630896 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.631232 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.635345 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717020 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-var-lock\") pod \"ac77c762-18bd-4150-8829-a1a3c85759df\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717287 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-var-lock" (OuterVolumeSpecName: "var-lock") pod "ac77c762-18bd-4150-8829-a1a3c85759df" (UID: "ac77c762-18bd-4150-8829-a1a3c85759df"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717342 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-kubelet-dir\") pod \"ac77c762-18bd-4150-8829-a1a3c85759df\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717463 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac77c762-18bd-4150-8829-a1a3c85759df-kube-api-access\") pod \"ac77c762-18bd-4150-8829-a1a3c85759df\" (UID: \"ac77c762-18bd-4150-8829-a1a3c85759df\") " Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717463 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ac77c762-18bd-4150-8829-a1a3c85759df" (UID: "ac77c762-18bd-4150-8829-a1a3c85759df"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717948 4765 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.717969 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac77c762-18bd-4150-8829-a1a3c85759df-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.722095 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac77c762-18bd-4150-8829-a1a3c85759df-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ac77c762-18bd-4150-8829-a1a3c85759df" (UID: "ac77c762-18bd-4150-8829-a1a3c85759df"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.760502 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.761354 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.762066 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.762594 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.763051 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.763420 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.763921 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: 
connect: connection refused" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819269 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819386 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819384 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819423 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819456 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819641 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819714 4765 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819726 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac77c762-18bd-4150-8829-a1a3c85759df-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819736 4765 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:21 crc kubenswrapper[4765]: I1203 20:42:21.819743 4765 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.319787 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.320720 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d" exitCode=0 Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.320809 4765 scope.go:117] "RemoveContainer" containerID="3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.320976 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.325581 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac77c762-18bd-4150-8829-a1a3c85759df","Type":"ContainerDied","Data":"868d4e0da4092f324a251cd2b26a456232af43eb459fd88cae5f9e416cb38fef"} Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.325619 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.325648 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868d4e0da4092f324a251cd2b26a456232af43eb459fd88cae5f9e416cb38fef" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.363507 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.364185 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.364710 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.365352 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.365980 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:22 crc kubenswrapper[4765]: I1203 20:42:22.373290 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 20:42:22 crc kubenswrapper[4765]: E1203 20:42:22.398812 4765 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" volumeName="registry-storage" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.357735 4765 scope.go:117] "RemoveContainer" containerID="007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.371033 4765 scope.go:117] "RemoveContainer" containerID="39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.387104 4765 scope.go:117] "RemoveContainer" containerID="f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.400042 4765 scope.go:117] "RemoveContainer" containerID="80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.423161 4765 scope.go:117] "RemoveContainer" containerID="89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 
20:42:23.439008 4765 scope.go:117] "RemoveContainer" containerID="3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432" Dec 03 20:42:23 crc kubenswrapper[4765]: E1203 20:42:23.439537 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\": container with ID starting with 3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432 not found: ID does not exist" containerID="3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.439579 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432"} err="failed to get container status \"3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\": rpc error: code = NotFound desc = could not find container \"3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432\": container with ID starting with 3e7192c5bedf2ce72f89541ec45498c897ce26c8e666988ea8bb4db2c5e6e432 not found: ID does not exist" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.439608 4765 scope.go:117] "RemoveContainer" containerID="007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0" Dec 03 20:42:23 crc kubenswrapper[4765]: E1203 20:42:23.439894 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\": container with ID starting with 007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0 not found: ID does not exist" containerID="007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.439947 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0"} err="failed to get container status \"007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\": rpc error: code = NotFound desc = could not find container \"007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0\": container with ID starting with 007cdd60da43a60cc5781836eb60c8a7605b1e2f64f7a8bcecf72e552834bbe0 not found: ID does not exist" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.439982 4765 scope.go:117] "RemoveContainer" containerID="39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467" Dec 03 20:42:23 crc kubenswrapper[4765]: E1203 20:42:23.440551 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\": container with ID starting with 39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467 not found: ID does not exist" containerID="39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.440611 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467"} err="failed to get container status \"39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\": rpc error: code = NotFound desc = could not find container \"39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467\": container with ID starting with 39c4008f9741ac99fc37ac011daa7935ea2317f8bedb218455af50929408d467 not found: ID does not exist" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.440639 4765 scope.go:117] "RemoveContainer" containerID="f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d" Dec 03 20:42:23 crc kubenswrapper[4765]: E1203 20:42:23.440895 4765 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\": container with ID starting with f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d not found: ID does not exist" containerID="f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.440927 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d"} err="failed to get container status \"f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\": rpc error: code = NotFound desc = could not find container \"f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d\": container with ID starting with f70471fa9a24858bd9cf41a238dc93c16254204917f65ad94fcecabe6da4326d not found: ID does not exist" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.440946 4765 scope.go:117] "RemoveContainer" containerID="80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d" Dec 03 20:42:23 crc kubenswrapper[4765]: E1203 20:42:23.441220 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\": container with ID starting with 80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d not found: ID does not exist" containerID="80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.441246 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d"} err="failed to get container status \"80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\": rpc error: code = NotFound desc = could not find container 
\"80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d\": container with ID starting with 80f69a059907a7306e87e94e87f6e5c7148eecd48ea4a170216b2d468521e87d not found: ID does not exist" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.441262 4765 scope.go:117] "RemoveContainer" containerID="89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9" Dec 03 20:42:23 crc kubenswrapper[4765]: E1203 20:42:23.441958 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\": container with ID starting with 89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9 not found: ID does not exist" containerID="89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9" Dec 03 20:42:23 crc kubenswrapper[4765]: I1203 20:42:23.441983 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9"} err="failed to get container status \"89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\": rpc error: code = NotFound desc = could not find container \"89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9\": container with ID starting with 89905ff8b15d036dd5936c95fad10698bc895822915c0d9a372a9ff62b1aa0e9 not found: ID does not exist" Dec 03 20:42:24 crc kubenswrapper[4765]: E1203 20:42:24.438822 4765 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:24 crc kubenswrapper[4765]: I1203 20:42:24.439401 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:24 crc kubenswrapper[4765]: W1203 20:42:24.468481 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-a7af44be0824aede494271c558e90dab960704bb25dafbd953a9bcdf0638dd54 WatchSource:0}: Error finding container a7af44be0824aede494271c558e90dab960704bb25dafbd953a9bcdf0638dd54: Status 404 returned error can't find the container with id a7af44be0824aede494271c558e90dab960704bb25dafbd953a9bcdf0638dd54 Dec 03 20:42:25 crc kubenswrapper[4765]: I1203 20:42:25.344533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a7af44be0824aede494271c558e90dab960704bb25dafbd953a9bcdf0638dd54"} Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.196229 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.196605 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.246854 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.247404 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.247694 4765 
status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.248113 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.248458 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.351548 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04"} Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.353503 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.354131 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.354730 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.355101 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: E1203 20:42:26.355154 4765 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.406776 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ftrfh" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.407429 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc 
kubenswrapper[4765]: I1203 20:42:26.407782 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.408333 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:26 crc kubenswrapper[4765]: I1203 20:42:26.408842 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:27 crc kubenswrapper[4765]: E1203 20:42:27.358677 4765 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.588597 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.588813 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.654960 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.655711 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.656552 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.657017 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:27 crc kubenswrapper[4765]: I1203 20:42:27.657481 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.225156 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-r6rmn.187dcf489555f40f 
openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-r6rmn,UID:1feb87dd-af7d-4048-bf1c-df1541bb8301,APIVersion:v1,ResourceVersion:29394,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 395ms (395ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,LastTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.406673 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.407418 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.409206 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.409647 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.409953 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.588780 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.588831 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.660215 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.660961 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.661434 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: 
connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.661930 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.662319 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.727427 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.727956 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.728203 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.728510 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc 
kubenswrapper[4765]: E1203 20:42:28.728830 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:28 crc kubenswrapper[4765]: I1203 20:42:28.728876 4765 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.729121 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="200ms" Dec 03 20:42:28 crc kubenswrapper[4765]: E1203 20:42:28.930440 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="400ms" Dec 03 20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.216670 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.217031 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.217677 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.217888 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 
20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.218128 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.218147 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:42:29 crc kubenswrapper[4765]: E1203 20:42:29.331639 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="800ms" Dec 03 20:42:29 crc kubenswrapper[4765]: I1203 20:42:29.416604 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k92tm" Dec 03 20:42:29 crc kubenswrapper[4765]: I1203 20:42:29.417412 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: I1203 20:42:29.417882 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: I1203 20:42:29.418419 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:29 crc kubenswrapper[4765]: I1203 20:42:29.418651 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:30 crc kubenswrapper[4765]: E1203 20:42:30.132687 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="1.6s" Dec 03 20:42:31 crc kubenswrapper[4765]: E1203 20:42:31.734119 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="3.2s" Dec 03 20:42:32 crc kubenswrapper[4765]: I1203 20:42:32.362361 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:32 crc kubenswrapper[4765]: I1203 20:42:32.362743 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 
20:42:32 crc kubenswrapper[4765]: I1203 20:42:32.363247 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:32 crc kubenswrapper[4765]: I1203 20:42:32.363565 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:34 crc kubenswrapper[4765]: E1203 20:42:34.935690 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="6.4s" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.402794 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.402852 4765 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba" exitCode=1 Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.402885 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba"} Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 
20:42:35.403348 4765 scope.go:117] "RemoveContainer" containerID="660b427784f2e32d7e5e864ea988d4a85c9c20dc6920d4db42b9934545f054ba" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.403544 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.403849 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.404130 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.404465 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:35 crc kubenswrapper[4765]: I1203 20:42:35.404633 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.314919 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.410985 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.411057 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73281cd4b0800ef0b5ff14be57db94822279a25f746a38eec95976b4d24a3795"} Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.411938 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.412212 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.412546 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.413032 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:36 crc kubenswrapper[4765]: I1203 20:42:36.413597 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:37 crc kubenswrapper[4765]: I1203 20:42:37.808018 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:42:38 crc kubenswrapper[4765]: E1203 20:42:38.226764 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-r6rmn.187dcf489555f40f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-r6rmn,UID:1feb87dd-af7d-4048-bf1c-df1541bb8301,APIVersion:v1,ResourceVersion:29394,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 395ms (395ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,LastTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 20:42:38 crc kubenswrapper[4765]: I1203 20:42:38.834865 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" containerName="oauth-openshift" containerID="cri-o://b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60" gracePeriod=15 Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.167421 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.167996 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.168407 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.168683 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.168954 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.169168 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.169405 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.268978 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f78mn\" (UniqueName: \"kubernetes.io/projected/70f44f4f-8e44-460a-9696-5af11fc75a95-kube-api-access-f78mn\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.269034 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-login\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270070 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-dir\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270124 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-policies\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: 
\"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270158 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-idp-0-file-data\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270193 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-router-certs\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270219 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-session\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270241 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-error\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270264 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-service-ca\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc 
kubenswrapper[4765]: I1203 20:42:39.270332 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-ocp-branding-template\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270373 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-provider-selection\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270396 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-cliconfig\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270439 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-serving-cert\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.270490 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-trusted-ca-bundle\") pod \"70f44f4f-8e44-460a-9696-5af11fc75a95\" (UID: \"70f44f4f-8e44-460a-9696-5af11fc75a95\") " Dec 03 20:42:39 crc kubenswrapper[4765]: 
I1203 20:42:39.271498 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.271631 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.271937 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.271997 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.272511 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.274628 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.274885 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.274962 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f44f4f-8e44-460a-9696-5af11fc75a95-kube-api-access-f78mn" (OuterVolumeSpecName: "kube-api-access-f78mn") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "kube-api-access-f78mn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.275149 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.275165 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.275683 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.275830 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.276028 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.276231 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "70f44f4f-8e44-460a-9696-5af11fc75a95" (UID: "70f44f4f-8e44-460a-9696-5af11fc75a95"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.371648 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.372113 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f78mn\" (UniqueName: \"kubernetes.io/projected/70f44f4f-8e44-460a-9696-5af11fc75a95-kube-api-access-f78mn\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.372380 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.372572 4765 reconciler_common.go:293] 
"Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.372709 4765 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.372835 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.372974 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.373105 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.373275 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.373484 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.373660 4765 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.373798 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.373954 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.374100 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/70f44f4f-8e44-460a-9696-5af11fc75a95-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.430065 4765 generic.go:334] "Generic (PLEG): container finished" podID="70f44f4f-8e44-460a-9696-5af11fc75a95" containerID="b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60" exitCode=0 Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.430120 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" event={"ID":"70f44f4f-8e44-460a-9696-5af11fc75a95","Type":"ContainerDied","Data":"b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60"} Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.430155 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" 
event={"ID":"70f44f4f-8e44-460a-9696-5af11fc75a95","Type":"ContainerDied","Data":"15517a6fa097ba4862bbe297166a812e25a2b04a2bf5a03da2d70d7e6ec8be27"} Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.430180 4765 scope.go:117] "RemoveContainer" containerID="b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.430865 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.432161 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.432686 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.433030 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.433261 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.433833 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.434428 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.453062 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.453560 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.454258 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.454733 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.455248 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.455816 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.465188 4765 scope.go:117] "RemoveContainer" containerID="b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.465820 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60\": container with ID starting with b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60 not found: ID does not exist" containerID="b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60" Dec 
03 20:42:39 crc kubenswrapper[4765]: I1203 20:42:39.465857 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60"} err="failed to get container status \"b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60\": rpc error: code = NotFound desc = could not find container \"b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60\": container with ID starting with b1ce41c26465e3e5c20a1d24500abc00050a37dc0d99e7b58b6373a42ccc1a60 not found: ID does not exist" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.610812 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.611431 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error 
getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.611889 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.612356 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.612779 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:39 crc kubenswrapper[4765]: E1203 20:42:39.612816 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:42:41 crc kubenswrapper[4765]: E1203 20:42:41.337792 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="7s" Dec 03 20:42:42 crc kubenswrapper[4765]: I1203 20:42:42.364185 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:42 crc kubenswrapper[4765]: I1203 20:42:42.365071 
4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:42 crc kubenswrapper[4765]: I1203 20:42:42.365550 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:42 crc kubenswrapper[4765]: I1203 20:42:42.365955 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:42 crc kubenswrapper[4765]: I1203 20:42:42.366371 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:42 crc kubenswrapper[4765]: I1203 20:42:42.366733 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 
20:42:46.314855 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.319860 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.320591 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.320842 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.321113 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.321373 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: 
I1203 20:42:46.321740 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.322261 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.477612 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.478222 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.478681 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.479081 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.483852 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.484449 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:46 crc kubenswrapper[4765]: I1203 20:42:46.491520 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:48 crc kubenswrapper[4765]: E1203 20:42:48.228224 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.65:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-r6rmn.187dcf489555f40f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-r6rmn,UID:1feb87dd-af7d-4048-bf1c-df1541bb8301,APIVersion:v1,ResourceVersion:29394,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 395ms (395ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,LastTimestamp:2025-12-03 20:42:19.649135631 +0000 UTC m=+237.579680782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 20:42:48 crc kubenswrapper[4765]: E1203 20:42:48.339204 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="7s" Dec 03 20:42:49 crc kubenswrapper[4765]: E1203 20:42:49.785513 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T20:42:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:49 crc kubenswrapper[4765]: E1203 20:42:49.785957 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:49 crc kubenswrapper[4765]: E1203 20:42:49.786211 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:49 crc kubenswrapper[4765]: E1203 20:42:49.786445 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 
20:42:49 crc kubenswrapper[4765]: E1203 20:42:49.786683 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:49 crc kubenswrapper[4765]: E1203 20:42:49.786696 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.362845 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.363759 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.364468 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.364929 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.365127 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.365392 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.519852 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.520285 4765 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27" exitCode=1 Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.520348 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27"} Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.520851 4765 scope.go:117] "RemoveContainer" containerID="54b805efc2b7937443c2f01571c0e866b9c6625a186d292f9d059bd56c794f27" Dec 03 20:42:52 crc 
kubenswrapper[4765]: I1203 20:42:52.521742 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.522445 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.523030 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.523576 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.524072 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 
20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.524600 4765 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:52 crc kubenswrapper[4765]: I1203 20:42:52.525070 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.350017 4765 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podf4b27818a5e8e43d0dc095d08835c792"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podf4b27818a5e8e43d0dc095d08835c792] : Timed out while waiting for systemd to remove kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice" Dec 03 20:42:53 crc kubenswrapper[4765]: E1203 20:42:53.350422 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podf4b27818a5e8e43d0dc095d08835c792] : unable to destroy cgroup paths for cgroup [kubepods burstable podf4b27818a5e8e43d0dc095d08835c792] : Timed out while waiting for systemd to remove kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.352278 4765 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","podac77c762-18bd-4150-8829-a1a3c85759df"] err="unable to destroy cgroup paths for cgroup 
[kubepods podac77c762-18bd-4150-8829-a1a3c85759df] : Timed out while waiting for systemd to remove kubepods-podac77c762_18bd_4150_8829_a1a3c85759df.slice" Dec 03 20:42:53 crc kubenswrapper[4765]: E1203 20:42:53.352363 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods podac77c762-18bd-4150-8829-a1a3c85759df] : unable to destroy cgroup paths for cgroup [kubepods podac77c762-18bd-4150-8829-a1a3c85759df] : Timed out while waiting for systemd to remove kubepods-podac77c762_18bd_4150_8829_a1a3c85759df.slice" pod="openshift-kube-apiserver/installer-9-crc" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.529365 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.530030 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"94c8246c41abfc35f5a2e6482bacb3da55426263dabe6e6c02ad64ca7d032c2b"} Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.530086 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.530113 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.531065 4765 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.531694 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.532087 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.532466 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.532949 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.533203 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.533538 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.533863 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.534087 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.534513 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" 
pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.534808 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.535113 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.535399 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.535630 4765 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.535960 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.536244 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.536494 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.536678 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.536956 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.537289 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.537518 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.537778 4765 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:53 crc kubenswrapper[4765]: I1203 20:42:53.538109 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: E1203 20:42:55.340102 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.65:6443: connect: connection refused" interval="7s" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.359791 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.361020 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.361573 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.361871 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.362158 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.362628 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.362883 4765 status_manager.go:851] "Failed to get status for pod" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.363187 4765 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.373110 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.373151 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:42:55 crc kubenswrapper[4765]: E1203 20:42:55.373685 4765 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.374238 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:55 crc kubenswrapper[4765]: W1203 20:42:55.398114 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-4c6226a88c8e394542ad9b1e6d48688ce0cd7c60d50f8a98d45018d7f3cd06b5 WatchSource:0}: Error finding container 4c6226a88c8e394542ad9b1e6d48688ce0cd7c60d50f8a98d45018d7f3cd06b5: Status 404 returned error can't find the container with id 4c6226a88c8e394542ad9b1e6d48688ce0cd7c60d50f8a98d45018d7f3cd06b5 Dec 03 20:42:55 crc kubenswrapper[4765]: I1203 20:42:55.541185 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c6226a88c8e394542ad9b1e6d48688ce0cd7c60d50f8a98d45018d7f3cd06b5"} Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.548457 4765 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d49ab71fb15861b0a93ec770c6ccbf1b69e5264e034a9c6320188a9fe4f3296b" exitCode=0 Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.548553 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d49ab71fb15861b0a93ec770c6ccbf1b69e5264e034a9c6320188a9fe4f3296b"} Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.548908 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.548939 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:42:56 crc kubenswrapper[4765]: E1203 20:42:56.549445 4765 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.549466 4765 status_manager.go:851] "Failed to get status for pod" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.550042 4765 status_manager.go:851] "Failed to get status for pod" podUID="aa9f4500-9c6f-4415-bea7-eebfda74d3ee" pod="openshift-marketplace/redhat-marketplace-k92tm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-k92tm\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.550557 4765 status_manager.go:851] "Failed to get status for pod" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" pod="openshift-authentication/oauth-openshift-558db77b4-2hvlc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2hvlc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.551251 4765 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.551984 4765 status_manager.go:851] "Failed to get status for pod" 
podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" pod="openshift-marketplace/community-operators-r6rmn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r6rmn\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.552273 4765 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:56 crc kubenswrapper[4765]: I1203 20:42:56.552881 4765 status_manager.go:851] "Failed to get status for pod" podUID="80d51b27-d825-4e91-81bd-8e3297c4f550" pod="openshift-marketplace/redhat-operators-ftrfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ftrfh\": dial tcp 38.102.83.65:6443: connect: connection refused" Dec 03 20:42:57 crc kubenswrapper[4765]: I1203 20:42:57.560336 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"320d080a11ce23cb1a621a7c09c1a170a036c4111b5dafee73de84059e4a675a"} Dec 03 20:42:57 crc kubenswrapper[4765]: I1203 20:42:57.560591 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca0cdc1c237fef47d1b97e2ece62b34c3cd7b00fb90b5e41ac122477d29e7dd2"} Dec 03 20:42:57 crc kubenswrapper[4765]: I1203 20:42:57.560604 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"130b415c40c0fbcb57e35dd820b3fd1640a701c6902a0d2baf3dba84b49c039f"} Dec 03 20:42:57 crc kubenswrapper[4765]: I1203 20:42:57.560612 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"043d987a38e763d51c852e9193f94de4871d99dba7753d203893e116f2736059"} Dec 03 20:42:58 crc kubenswrapper[4765]: I1203 20:42:58.567573 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad589f101dcaae2a68adc43c26cf4f02680a66fc90b7eff49de727fb00b995f8"} Dec 03 20:42:58 crc kubenswrapper[4765]: I1203 20:42:58.567875 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:42:58 crc kubenswrapper[4765]: I1203 20:42:58.567896 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:42:58 crc kubenswrapper[4765]: I1203 20:42:58.567924 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:00 crc kubenswrapper[4765]: I1203 20:43:00.375426 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:43:00 crc kubenswrapper[4765]: I1203 20:43:00.375481 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:43:00 crc kubenswrapper[4765]: I1203 20:43:00.382268 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:43:03 crc kubenswrapper[4765]: I1203 20:43:03.576914 
4765 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:43:03 crc kubenswrapper[4765]: I1203 20:43:03.604887 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:03 crc kubenswrapper[4765]: I1203 20:43:03.605110 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:03 crc kubenswrapper[4765]: I1203 20:43:03.610775 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:43:03 crc kubenswrapper[4765]: I1203 20:43:03.737009 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="73b5a9c4-cece-46c1-bf7a-99383912828e" Dec 03 20:43:04 crc kubenswrapper[4765]: I1203 20:43:04.609198 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:04 crc kubenswrapper[4765]: I1203 20:43:04.609779 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:04 crc kubenswrapper[4765]: I1203 20:43:04.611744 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="73b5a9c4-cece-46c1-bf7a-99383912828e" Dec 03 20:43:22 crc kubenswrapper[4765]: I1203 20:43:22.228563 4765 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 03 20:43:26 crc kubenswrapper[4765]: I1203 20:43:26.985554 4765 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 20:43:27 crc kubenswrapper[4765]: I1203 20:43:27.954013 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 20:43:28 crc kubenswrapper[4765]: I1203 20:43:28.662177 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 20:43:29 crc kubenswrapper[4765]: I1203 20:43:29.002592 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 20:43:29 crc kubenswrapper[4765]: I1203 20:43:29.435854 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 20:43:29 crc kubenswrapper[4765]: I1203 20:43:29.549660 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 20:43:29 crc kubenswrapper[4765]: I1203 20:43:29.686326 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 20:43:29 crc kubenswrapper[4765]: I1203 20:43:29.967870 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 20:43:29 crc kubenswrapper[4765]: I1203 20:43:29.976139 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 20:43:30 crc kubenswrapper[4765]: I1203 20:43:30.026112 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 20:43:30 crc kubenswrapper[4765]: I1203 20:43:30.193874 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 20:43:30 crc 
kubenswrapper[4765]: I1203 20:43:30.425878 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 20:43:30 crc kubenswrapper[4765]: I1203 20:43:30.483955 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.053996 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.057358 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.408928 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.434410 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.441240 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.506869 4765 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.647348 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.831183 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.836525 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 
20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.891134 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 20:43:31 crc kubenswrapper[4765]: I1203 20:43:31.930204 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 20:43:32 crc kubenswrapper[4765]: I1203 20:43:32.102452 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 20:43:32 crc kubenswrapper[4765]: I1203 20:43:32.143701 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 20:43:32 crc kubenswrapper[4765]: I1203 20:43:32.606385 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 20:43:32 crc kubenswrapper[4765]: I1203 20:43:32.640889 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 20:43:33 crc kubenswrapper[4765]: I1203 20:43:33.037031 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 20:43:33 crc kubenswrapper[4765]: I1203 20:43:33.356378 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 20:43:33 crc kubenswrapper[4765]: I1203 20:43:33.445649 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 20:43:33 crc kubenswrapper[4765]: I1203 20:43:33.735251 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 20:43:33 crc kubenswrapper[4765]: I1203 20:43:33.740575 4765 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.306074 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.327850 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.330183 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.456826 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.528488 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.662258 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.714874 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.731738 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.791895 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 20:43:34 crc kubenswrapper[4765]: I1203 20:43:34.899958 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 20:43:35 crc kubenswrapper[4765]: I1203 
20:43:35.048546 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 20:43:35 crc kubenswrapper[4765]: I1203 20:43:35.796129 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 20:43:35 crc kubenswrapper[4765]: I1203 20:43:35.800088 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 20:43:36 crc kubenswrapper[4765]: I1203 20:43:36.112923 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 20:43:36 crc kubenswrapper[4765]: I1203 20:43:36.215751 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 20:43:36 crc kubenswrapper[4765]: I1203 20:43:36.587789 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 20:43:36 crc kubenswrapper[4765]: I1203 20:43:36.694762 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 20:43:36 crc kubenswrapper[4765]: I1203 20:43:36.859600 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 20:43:36 crc kubenswrapper[4765]: I1203 20:43:36.900854 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 20:43:37 crc kubenswrapper[4765]: I1203 20:43:37.126153 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 20:43:37 crc kubenswrapper[4765]: I1203 20:43:37.462967 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 
20:43:37 crc kubenswrapper[4765]: I1203 20:43:37.560137 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 20:43:37 crc kubenswrapper[4765]: I1203 20:43:37.943029 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.004490 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.091167 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.137122 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.229799 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.314756 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.454697 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.536091 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.622053 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.655126 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.719395 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.759909 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 20:43:38 crc kubenswrapper[4765]: I1203 20:43:38.983131 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 20:43:39 crc kubenswrapper[4765]: I1203 20:43:39.138919 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 20:43:39 crc kubenswrapper[4765]: I1203 20:43:39.224170 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 20:43:39 crc kubenswrapper[4765]: I1203 20:43:39.539840 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 20:43:39 crc kubenswrapper[4765]: I1203 20:43:39.695795 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 20:43:39 crc kubenswrapper[4765]: I1203 20:43:39.771509 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 20:43:40 crc kubenswrapper[4765]: I1203 20:43:40.721712 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 20:43:40 crc kubenswrapper[4765]: I1203 20:43:40.788863 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 20:43:40 crc kubenswrapper[4765]: I1203 20:43:40.798873 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 20:43:41 crc kubenswrapper[4765]: I1203 20:43:41.260255 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 20:43:41 crc kubenswrapper[4765]: I1203 20:43:41.328766 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 20:43:41 crc kubenswrapper[4765]: I1203 20:43:41.478114 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 20:43:41 crc kubenswrapper[4765]: I1203 20:43:41.612911 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 20:43:41 crc kubenswrapper[4765]: I1203 20:43:41.648647 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 20:43:41 crc kubenswrapper[4765]: I1203 20:43:41.916579 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.110912 4765 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.211781 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.286435 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.463161 
4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.471938 4765 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.567664 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.581617 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.822426 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.918924 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 20:43:42 crc kubenswrapper[4765]: I1203 20:43:42.973467 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 20:43:43 crc kubenswrapper[4765]: I1203 20:43:43.289519 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 20:43:43 crc kubenswrapper[4765]: I1203 20:43:43.781806 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 20:43:43 crc kubenswrapper[4765]: I1203 20:43:43.906014 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 20:43:44 crc kubenswrapper[4765]: I1203 20:43:44.187627 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 20:43:44 crc kubenswrapper[4765]: I1203 20:43:44.252371 4765 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 20:43:44 crc kubenswrapper[4765]: I1203 20:43:44.401601 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 20:43:44 crc kubenswrapper[4765]: I1203 20:43:44.445695 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 20:43:44 crc kubenswrapper[4765]: I1203 20:43:44.626888 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:44.672198 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:44.743408 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:44.858835 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:44.865392 4765 generic.go:334] "Generic (PLEG): container finished" podID="72a5b180-7b23-4bfd-a10b-c35f73c732aa" containerID="2e6630748cc2fc47c7e9d290edb39c8aeb87eee8f15c4b0e09e6c9a87aff06b3" exitCode=0 Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:44.865434 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" event={"ID":"72a5b180-7b23-4bfd-a10b-c35f73c732aa","Type":"ContainerDied","Data":"2e6630748cc2fc47c7e9d290edb39c8aeb87eee8f15c4b0e09e6c9a87aff06b3"} Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:44.865869 4765 scope.go:117] "RemoveContainer" containerID="2e6630748cc2fc47c7e9d290edb39c8aeb87eee8f15c4b0e09e6c9a87aff06b3" Dec 03 
20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.248463 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.382754 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.392188 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.529217 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.841099 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.872597 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" event={"ID":"72a5b180-7b23-4bfd-a10b-c35f73c732aa","Type":"ContainerStarted","Data":"93d5e0bae3fa118f00292436b15b0938c3a91d3f46db705d58b5755554c53c04"} Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.873106 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:43:45 crc kubenswrapper[4765]: I1203 20:43:45.875971 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ztctx" Dec 03 20:43:46 crc kubenswrapper[4765]: I1203 20:43:46.101239 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 20:43:46 crc kubenswrapper[4765]: I1203 20:43:46.218932 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Dec 03 20:43:46 crc kubenswrapper[4765]: I1203 20:43:46.268605 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 20:43:46 crc kubenswrapper[4765]: I1203 20:43:46.405848 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 20:43:46 crc kubenswrapper[4765]: I1203 20:43:46.483856 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 20:43:46 crc kubenswrapper[4765]: I1203 20:43:46.973570 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 20:43:47 crc kubenswrapper[4765]: I1203 20:43:47.046801 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 20:43:47 crc kubenswrapper[4765]: I1203 20:43:47.215483 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 20:43:47 crc kubenswrapper[4765]: I1203 20:43:47.708606 4765 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 20:43:47 crc kubenswrapper[4765]: I1203 20:43:47.747672 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 20:43:47 crc kubenswrapper[4765]: I1203 20:43:47.984396 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 20:43:48 crc kubenswrapper[4765]: I1203 20:43:48.023688 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 20:43:48 crc kubenswrapper[4765]: I1203 20:43:48.045921 4765 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 20:43:48 crc kubenswrapper[4765]: I1203 20:43:48.319388 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 20:43:48 crc kubenswrapper[4765]: I1203 20:43:48.344584 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.039382 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.076220 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.100610 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.109875 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.209790 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.306274 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.372517 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.494691 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.601465 4765 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 20:43:49 crc kubenswrapper[4765]: I1203 20:43:49.722988 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 20:43:50 crc kubenswrapper[4765]: I1203 20:43:50.100494 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 20:43:50 crc kubenswrapper[4765]: I1203 20:43:50.231050 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 20:43:50 crc kubenswrapper[4765]: I1203 20:43:50.254265 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 20:43:50 crc kubenswrapper[4765]: I1203 20:43:50.975486 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.202431 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.219805 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.586169 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.766646 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.802286 4765 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 
20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.804668 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k92tm" podStartSLOduration=91.884476359 podStartE2EDuration="1m33.804640575s" podCreationTimestamp="2025-12-03 20:42:18 +0000 UTC" firstStartedPulling="2025-12-03 20:42:19.263023293 +0000 UTC m=+237.193568434" lastFinishedPulling="2025-12-03 20:42:21.183187499 +0000 UTC m=+239.113732650" observedRunningTime="2025-12-03 20:43:03.665926917 +0000 UTC m=+281.596472088" watchObservedRunningTime="2025-12-03 20:43:51.804640575 +0000 UTC m=+329.735185796" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.804854 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ftrfh" podStartSLOduration=94.359651067 podStartE2EDuration="1m36.80484491s" podCreationTimestamp="2025-12-03 20:42:15 +0000 UTC" firstStartedPulling="2025-12-03 20:42:18.243886399 +0000 UTC m=+236.174431550" lastFinishedPulling="2025-12-03 20:42:20.689080242 +0000 UTC m=+238.619625393" observedRunningTime="2025-12-03 20:43:03.636833221 +0000 UTC m=+281.567378412" watchObservedRunningTime="2025-12-03 20:43:51.80484491 +0000 UTC m=+329.735390101" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.808073 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r6rmn" podStartSLOduration=93.406888795 podStartE2EDuration="1m34.808062135s" podCreationTimestamp="2025-12-03 20:42:17 +0000 UTC" firstStartedPulling="2025-12-03 20:42:18.247950831 +0000 UTC m=+236.178495982" lastFinishedPulling="2025-12-03 20:42:19.649124171 +0000 UTC m=+237.579669322" observedRunningTime="2025-12-03 20:43:03.60356571 +0000 UTC m=+281.534110861" watchObservedRunningTime="2025-12-03 20:43:51.808062135 +0000 UTC m=+329.738607326" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.810787 4765 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2hvlc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.810868 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-cc7989dc6-gpw6l","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 20:43:51 crc kubenswrapper[4765]: E1203 20:43:51.811133 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" containerName="installer" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.811160 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" containerName="installer" Dec 03 20:43:51 crc kubenswrapper[4765]: E1203 20:43:51.811206 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" containerName="oauth-openshift" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.811223 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" containerName="oauth-openshift" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.811285 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.811348 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a1b459a3-4f56-4c3a-b8ff-ecaeabe9fb69" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.811439 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac77c762-18bd-4150-8829-a1a3c85759df" containerName="installer" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.811469 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" 
containerName="oauth-openshift" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.812525 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.815202 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.816048 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.816263 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.816711 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.820123 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.821935 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.822118 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.822554 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.823388 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 20:43:51 crc kubenswrapper[4765]: 
I1203 20:43:51.823569 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.823631 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.824713 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.828782 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.837335 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.839243 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.845215 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.869033 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=14.86900788 podStartE2EDuration="14.86900788s" podCreationTimestamp="2025-12-03 20:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:43:51.849519895 +0000 UTC m=+329.780065086" watchObservedRunningTime="2025-12-03 20:43:51.86900788 +0000 UTC m=+329.799553121" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.878860 4765 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=48.878831682 podStartE2EDuration="48.878831682s" podCreationTimestamp="2025-12-03 20:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:43:51.871491846 +0000 UTC m=+329.802037017" watchObservedRunningTime="2025-12-03 20:43:51.878831682 +0000 UTC m=+329.809376873" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.896510 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956240 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-error\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956274 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956310 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: 
\"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956340 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-audit-policies\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956357 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-login\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956378 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956395 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956412 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956434 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956463 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2rx\" (UniqueName: \"kubernetes.io/projected/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-kube-api-access-nj2rx\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956481 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-audit-dir\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.956540 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-session\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:51 crc kubenswrapper[4765]: I1203 20:43:51.966070 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.057438 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.057503 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-audit-policies\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-login\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.058445 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-audit-policies\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059092 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059134 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc 
kubenswrapper[4765]: I1203 20:43:52.059228 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059349 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059474 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2rx\" (UniqueName: \"kubernetes.io/projected/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-kube-api-access-nj2rx\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059516 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059589 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-audit-dir\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059642 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059671 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-session\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059733 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.059759 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-error\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 
crc kubenswrapper[4765]: I1203 20:43:52.059902 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.060235 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.060290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-audit-dir\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.060721 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.065987 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.066010 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-error\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.066239 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.066409 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.066511 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-session\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.067196 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.067219 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.067975 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-v4-0-config-user-template-login\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.082768 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2rx\" (UniqueName: \"kubernetes.io/projected/bb93ba14-95cc-4a8c-a972-cea4e3eb77a8-kube-api-access-nj2rx\") pod \"oauth-openshift-cc7989dc6-gpw6l\" (UID: \"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.131051 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.137439 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:43:52 crc kubenswrapper[4765]: I1203 20:43:52.365773 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f44f4f-8e44-460a-9696-5af11fc75a95" path="/var/lib/kubelet/pods/70f44f4f-8e44-460a-9696-5af11fc75a95/volumes" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.008829 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.041422 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.198697 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.381550 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.391661 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.465698 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 20:43:53 crc kubenswrapper[4765]: I1203 20:43:53.674482 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.108872 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.141745 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.254668 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.369661 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.388247 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.399962 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.636507 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 20:43:54 crc kubenswrapper[4765]: I1203 20:43:54.701905 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 20:43:55 crc kubenswrapper[4765]: I1203 20:43:55.762919 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 20:43:56 crc kubenswrapper[4765]: I1203 20:43:56.667745 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 20:43:57 crc kubenswrapper[4765]: I1203 20:43:57.203218 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 20:43:58 crc kubenswrapper[4765]: I1203 20:43:58.195719 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 20:44:00 crc kubenswrapper[4765]: I1203 
20:44:00.021938 4765 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 20:44:00 crc kubenswrapper[4765]: I1203 20:44:00.022190 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04" gracePeriod=5 Dec 03 20:44:00 crc kubenswrapper[4765]: I1203 20:44:00.570072 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 20:44:00 crc kubenswrapper[4765]: I1203 20:44:00.822638 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 20:44:01 crc kubenswrapper[4765]: I1203 20:44:01.372772 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cc7989dc6-gpw6l"] Dec 03 20:44:01 crc kubenswrapper[4765]: I1203 20:44:01.576015 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cc7989dc6-gpw6l"] Dec 03 20:44:01 crc kubenswrapper[4765]: W1203 20:44:01.591402 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb93ba14_95cc_4a8c_a972_cea4e3eb77a8.slice/crio-c8cdc95192baa71dc551adbfce8510a584e711ca56664618ea9632b2154db83d WatchSource:0}: Error finding container c8cdc95192baa71dc551adbfce8510a584e711ca56664618ea9632b2154db83d: Status 404 returned error can't find the container with id c8cdc95192baa71dc551adbfce8510a584e711ca56664618ea9632b2154db83d Dec 03 20:44:01 crc kubenswrapper[4765]: I1203 20:44:01.968503 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" event={"ID":"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8","Type":"ContainerStarted","Data":"66d68428b67768ae4e4a3fa5577cbf3c29c4da828b917a9c5fdacf9fd5704e7b"} Dec 03 20:44:01 crc kubenswrapper[4765]: I1203 20:44:01.968862 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:44:01 crc kubenswrapper[4765]: I1203 20:44:01.969005 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" event={"ID":"bb93ba14-95cc-4a8c-a972-cea4e3eb77a8","Type":"ContainerStarted","Data":"c8cdc95192baa71dc551adbfce8510a584e711ca56664618ea9632b2154db83d"} Dec 03 20:44:02 crc kubenswrapper[4765]: I1203 20:44:02.225961 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" Dec 03 20:44:02 crc kubenswrapper[4765]: I1203 20:44:02.254018 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-cc7989dc6-gpw6l" podStartSLOduration=109.253999245 podStartE2EDuration="1m49.253999245s" podCreationTimestamp="2025-12-03 20:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:44:01.998950651 +0000 UTC m=+339.929495802" watchObservedRunningTime="2025-12-03 20:44:02.253999245 +0000 UTC m=+340.184544396" Dec 03 20:44:02 crc kubenswrapper[4765]: I1203 20:44:02.672154 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 20:44:02 crc kubenswrapper[4765]: I1203 20:44:02.992855 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 20:44:03 crc kubenswrapper[4765]: I1203 20:44:03.415745 4765 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 20:44:03 crc kubenswrapper[4765]: I1203 20:44:03.531605 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.604902 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.604969 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660229 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660279 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660321 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660354 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") 
pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660453 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660482 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660496 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660566 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660631 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660810 4765 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660822 4765 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660830 4765 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.660838 4765 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.671959 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.762124 4765 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.996954 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.997042 4765 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04" exitCode=137 Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.997109 4765 scope.go:117] "RemoveContainer" containerID="621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.997432 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.999148 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 20:44:05 crc kubenswrapper[4765]: I1203 20:44:05.999163 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.029019 4765 scope.go:117] "RemoveContainer" containerID="621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04" Dec 03 20:44:06 crc kubenswrapper[4765]: E1203 20:44:06.029563 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04\": container with ID starting with 621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04 not found: ID does not exist" containerID="621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.029610 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04"} err="failed to get container status \"621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04\": rpc error: code = NotFound desc = could not find container \"621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04\": container with ID starting with 621043279ec9c2640ffc64d5696a7e581207889dd634346cffc0bc4dd43d0f04 not found: ID does not exist" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.085190 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 20:44:06 crc kubenswrapper[4765]: E1203 20:44:06.087022 4765 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice\": RecentStats: unable to find data in memory cache]" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.242474 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.368242 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.368510 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.379073 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.379125 4765 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c0f87141-543b-45d3-a3b3-b030f517538e" Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.382895 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 20:44:06 crc kubenswrapper[4765]: I1203 20:44:06.382938 4765 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c0f87141-543b-45d3-a3b3-b030f517538e" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.403518 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jz5ss"] Dec 03 20:44:07 crc kubenswrapper[4765]: E1203 20:44:07.403809 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.403831 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.404020 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.405510 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.408565 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.413151 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jz5ss"] Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.428619 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.485079 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64g2\" (UniqueName: \"kubernetes.io/projected/fabbb260-e586-47ea-99a9-d34da1d9d2b9-kube-api-access-g64g2\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.485151 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-utilities\") pod \"certified-operators-jz5ss\" (UID: 
\"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.485199 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-catalog-content\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.586873 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64g2\" (UniqueName: \"kubernetes.io/projected/fabbb260-e586-47ea-99a9-d34da1d9d2b9-kube-api-access-g64g2\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.586954 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-utilities\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.587019 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-catalog-content\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.588115 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-utilities\") pod \"certified-operators-jz5ss\" (UID: 
\"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.588142 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-catalog-content\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.622790 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64g2\" (UniqueName: \"kubernetes.io/projected/fabbb260-e586-47ea-99a9-d34da1d9d2b9-kube-api-access-g64g2\") pod \"certified-operators-jz5ss\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:07 crc kubenswrapper[4765]: I1203 20:44:07.729369 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:08 crc kubenswrapper[4765]: I1203 20:44:08.047514 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 20:44:08 crc kubenswrapper[4765]: I1203 20:44:08.144769 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jz5ss"] Dec 03 20:44:08 crc kubenswrapper[4765]: I1203 20:44:08.977811 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.017376 4765 generic.go:334] "Generic (PLEG): container finished" podID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerID="0adfdccf8e956ceeaf48f57c7be62596152aa5df57bbbbb4424584b5f0075c00" exitCode=0 Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.017449 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerDied","Data":"0adfdccf8e956ceeaf48f57c7be62596152aa5df57bbbbb4424584b5f0075c00"} Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.017504 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerStarted","Data":"20f820d8261156654a26cf5d922329aaf3777d5b5c878b09428e23a14fa9954e"} Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.469554 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.618488 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.655264 4765 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 20:44:09 crc kubenswrapper[4765]: I1203 20:44:09.779211 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.024804 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerStarted","Data":"6b24227854b93c8211334a56d6ec672e62c53480f81d7054406d03370e04efdd"} Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.380292 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gj4nd"] Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.380817 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" podUID="29c890bc-a753-4a38-b8d5-33098898333b" containerName="controller-manager" containerID="cri-o://e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2" gracePeriod=30 Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.473803 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp"] Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.474006 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" podUID="c25824b2-7d4e-4fdd-ac80-d2975d802570" containerName="route-controller-manager" containerID="cri-o://7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695" gracePeriod=30 Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.562704 4765 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4wldp container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.562789 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" podUID="c25824b2-7d4e-4fdd-ac80-d2975d802570" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.725127 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.799113 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.804748 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.830154 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-client-ca\") pod \"29c890bc-a753-4a38-b8d5-33098898333b\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.830215 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-config\") pod \"29c890bc-a753-4a38-b8d5-33098898333b\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.830283 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-proxy-ca-bundles\") pod \"29c890bc-a753-4a38-b8d5-33098898333b\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.830335 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsf69\" (UniqueName: \"kubernetes.io/projected/29c890bc-a753-4a38-b8d5-33098898333b-kube-api-access-qsf69\") pod \"29c890bc-a753-4a38-b8d5-33098898333b\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.830358 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29c890bc-a753-4a38-b8d5-33098898333b-serving-cert\") pod \"29c890bc-a753-4a38-b8d5-33098898333b\" (UID: \"29c890bc-a753-4a38-b8d5-33098898333b\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.830994 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-client-ca" (OuterVolumeSpecName: "client-ca") pod "29c890bc-a753-4a38-b8d5-33098898333b" (UID: "29c890bc-a753-4a38-b8d5-33098898333b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.831055 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "29c890bc-a753-4a38-b8d5-33098898333b" (UID: "29c890bc-a753-4a38-b8d5-33098898333b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.832051 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-config" (OuterVolumeSpecName: "config") pod "29c890bc-a753-4a38-b8d5-33098898333b" (UID: "29c890bc-a753-4a38-b8d5-33098898333b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.841918 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c890bc-a753-4a38-b8d5-33098898333b-kube-api-access-qsf69" (OuterVolumeSpecName: "kube-api-access-qsf69") pod "29c890bc-a753-4a38-b8d5-33098898333b" (UID: "29c890bc-a753-4a38-b8d5-33098898333b"). InnerVolumeSpecName "kube-api-access-qsf69". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.843060 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c890bc-a753-4a38-b8d5-33098898333b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29c890bc-a753-4a38-b8d5-33098898333b" (UID: "29c890bc-a753-4a38-b8d5-33098898333b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.931767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-client-ca\") pod \"c25824b2-7d4e-4fdd-ac80-d2975d802570\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.931888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbnfh\" (UniqueName: \"kubernetes.io/projected/c25824b2-7d4e-4fdd-ac80-d2975d802570-kube-api-access-pbnfh\") pod \"c25824b2-7d4e-4fdd-ac80-d2975d802570\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.931935 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25824b2-7d4e-4fdd-ac80-d2975d802570-serving-cert\") pod \"c25824b2-7d4e-4fdd-ac80-d2975d802570\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.931993 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-config\") pod \"c25824b2-7d4e-4fdd-ac80-d2975d802570\" (UID: \"c25824b2-7d4e-4fdd-ac80-d2975d802570\") " Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.932219 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.932234 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsf69\" (UniqueName: \"kubernetes.io/projected/29c890bc-a753-4a38-b8d5-33098898333b-kube-api-access-qsf69\") on node \"crc\" DevicePath 
\"\"" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.932250 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29c890bc-a753-4a38-b8d5-33098898333b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.932262 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.932273 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c890bc-a753-4a38-b8d5-33098898333b-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.933420 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-client-ca" (OuterVolumeSpecName: "client-ca") pod "c25824b2-7d4e-4fdd-ac80-d2975d802570" (UID: "c25824b2-7d4e-4fdd-ac80-d2975d802570"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.933567 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-config" (OuterVolumeSpecName: "config") pod "c25824b2-7d4e-4fdd-ac80-d2975d802570" (UID: "c25824b2-7d4e-4fdd-ac80-d2975d802570"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.936086 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25824b2-7d4e-4fdd-ac80-d2975d802570-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c25824b2-7d4e-4fdd-ac80-d2975d802570" (UID: "c25824b2-7d4e-4fdd-ac80-d2975d802570"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:44:10 crc kubenswrapper[4765]: I1203 20:44:10.936321 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25824b2-7d4e-4fdd-ac80-d2975d802570-kube-api-access-pbnfh" (OuterVolumeSpecName: "kube-api-access-pbnfh") pod "c25824b2-7d4e-4fdd-ac80-d2975d802570" (UID: "c25824b2-7d4e-4fdd-ac80-d2975d802570"). InnerVolumeSpecName "kube-api-access-pbnfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.031613 4765 generic.go:334] "Generic (PLEG): container finished" podID="c25824b2-7d4e-4fdd-ac80-d2975d802570" containerID="7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695" exitCode=0 Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.031705 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" event={"ID":"c25824b2-7d4e-4fdd-ac80-d2975d802570","Type":"ContainerDied","Data":"7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695"} Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.031712 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.031746 4765 scope.go:117] "RemoveContainer" containerID="7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.031734 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp" event={"ID":"c25824b2-7d4e-4fdd-ac80-d2975d802570","Type":"ContainerDied","Data":"34928517784ad9713adf07bf2d991c330d6fab9b8e1490a902e76975241aa6d4"} Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.035033 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.035060 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbnfh\" (UniqueName: \"kubernetes.io/projected/c25824b2-7d4e-4fdd-ac80-d2975d802570-kube-api-access-pbnfh\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.035071 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c25824b2-7d4e-4fdd-ac80-d2975d802570-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.035103 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c25824b2-7d4e-4fdd-ac80-d2975d802570-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.035573 4765 generic.go:334] "Generic (PLEG): container finished" podID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerID="6b24227854b93c8211334a56d6ec672e62c53480f81d7054406d03370e04efdd" exitCode=0 Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 
20:44:11.035625 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerDied","Data":"6b24227854b93c8211334a56d6ec672e62c53480f81d7054406d03370e04efdd"} Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.037543 4765 generic.go:334] "Generic (PLEG): container finished" podID="29c890bc-a753-4a38-b8d5-33098898333b" containerID="e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2" exitCode=0 Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.037580 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" event={"ID":"29c890bc-a753-4a38-b8d5-33098898333b","Type":"ContainerDied","Data":"e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2"} Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.037632 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" event={"ID":"29c890bc-a753-4a38-b8d5-33098898333b","Type":"ContainerDied","Data":"41eac0c5db7818a67877cf4531dd4258ab4fc2017fef796bfcd16a0b5d62c913"} Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.037714 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gj4nd" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.051534 4765 scope.go:117] "RemoveContainer" containerID="7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.051892 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 20:44:11 crc kubenswrapper[4765]: E1203 20:44:11.052977 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695\": container with ID starting with 7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695 not found: ID does not exist" containerID="7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.053005 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695"} err="failed to get container status \"7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695\": rpc error: code = NotFound desc = could not find container \"7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695\": container with ID starting with 7277b9601f31a5aeb67a3a2ac0194b07d5e6e6d9787b7176287d8f10eee4b695 not found: ID does not exist" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.053026 4765 scope.go:117] "RemoveContainer" containerID="e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.070229 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gj4nd"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.073290 4765 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.078466 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gj4nd"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.083147 4765 scope.go:117] "RemoveContainer" containerID="e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2" Dec 03 20:44:11 crc kubenswrapper[4765]: E1203 20:44:11.084824 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2\": container with ID starting with e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2 not found: ID does not exist" containerID="e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.084926 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2"} err="failed to get container status \"e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2\": rpc error: code = NotFound desc = could not find container \"e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2\": container with ID starting with e6723e11d1b094fb39c552959c5e7eb07734d175d80da4bf58eb6c0eb9afa7d2 not found: ID does not exist" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.086623 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.092667 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4wldp"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.192215 4765 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.693122 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-747c54d555-p78sh"] Dec 03 20:44:11 crc kubenswrapper[4765]: E1203 20:44:11.693703 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25824b2-7d4e-4fdd-ac80-d2975d802570" containerName="route-controller-manager" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.693725 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25824b2-7d4e-4fdd-ac80-d2975d802570" containerName="route-controller-manager" Dec 03 20:44:11 crc kubenswrapper[4765]: E1203 20:44:11.693752 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c890bc-a753-4a38-b8d5-33098898333b" containerName="controller-manager" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.693764 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c890bc-a753-4a38-b8d5-33098898333b" containerName="controller-manager" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.693868 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c890bc-a753-4a38-b8d5-33098898333b" containerName="controller-manager" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.693883 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25824b2-7d4e-4fdd-ac80-d2975d802570" containerName="route-controller-manager" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.694451 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.696815 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.697002 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.698975 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.699122 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.699240 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.698915 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.705343 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.706263 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.706794 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.709832 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.710627 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.710828 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.710960 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.711202 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.711335 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.711781 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747c54d555-p78sh"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.719183 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk"] Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.745770 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-client-ca\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.745882 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttn5\" (UniqueName: \"kubernetes.io/projected/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-kube-api-access-xttn5\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.745914 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-serving-cert\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.745945 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-proxy-ca-bundles\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.745995 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-config\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " 
pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847136 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttn5\" (UniqueName: \"kubernetes.io/projected/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-kube-api-access-xttn5\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847192 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwv7\" (UniqueName: \"kubernetes.io/projected/2038c612-e3ae-4c59-a620-78af54d2af88-kube-api-access-klwv7\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847220 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-serving-cert\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847255 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-proxy-ca-bundles\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847319 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-config\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847361 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-config\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847390 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-client-ca\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-client-ca\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.847455 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2038c612-e3ae-4c59-a620-78af54d2af88-serving-cert\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 
20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.848909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-config\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.848916 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-proxy-ca-bundles\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.851840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-client-ca\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.854543 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-serving-cert\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.867038 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttn5\" (UniqueName: \"kubernetes.io/projected/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-kube-api-access-xttn5\") pod \"controller-manager-747c54d555-p78sh\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " 
pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.949164 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klwv7\" (UniqueName: \"kubernetes.io/projected/2038c612-e3ae-4c59-a620-78af54d2af88-kube-api-access-klwv7\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.949277 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-config\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.949360 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-client-ca\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.949401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2038c612-e3ae-4c59-a620-78af54d2af88-serving-cert\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.950165 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-client-ca\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.950319 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-config\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.952877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2038c612-e3ae-4c59-a620-78af54d2af88-serving-cert\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:11 crc kubenswrapper[4765]: I1203 20:44:11.969269 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klwv7\" (UniqueName: \"kubernetes.io/projected/2038c612-e3ae-4c59-a620-78af54d2af88-kube-api-access-klwv7\") pod \"route-controller-manager-6b6988469d-5kfkk\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.034131 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.034850 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.044182 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerStarted","Data":"c8d36e54bc750618285e16dd87e98a3ec63eb84b85768d72e428c6c03c5a0db3"} Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.054106 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.063218 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jz5ss" podStartSLOduration=2.589885283 podStartE2EDuration="5.063197738s" podCreationTimestamp="2025-12-03 20:44:07 +0000 UTC" firstStartedPulling="2025-12-03 20:44:09.019317503 +0000 UTC m=+346.949862654" lastFinishedPulling="2025-12-03 20:44:11.492629958 +0000 UTC m=+349.423175109" observedRunningTime="2025-12-03 20:44:12.059797668 +0000 UTC m=+349.990342829" watchObservedRunningTime="2025-12-03 20:44:12.063197738 +0000 UTC m=+349.993742889" Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.244867 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk"] Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.271018 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747c54d555-p78sh"] Dec 03 20:44:12 crc kubenswrapper[4765]: W1203 20:44:12.277720 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcbbdd90_b054_4ba4_a1ec_30a1fd829b32.slice/crio-4888161c77742b58fa4e326f0a090500779f64b3e9d574ec69e21e477128a69e WatchSource:0}: Error 
finding container 4888161c77742b58fa4e326f0a090500779f64b3e9d574ec69e21e477128a69e: Status 404 returned error can't find the container with id 4888161c77742b58fa4e326f0a090500779f64b3e9d574ec69e21e477128a69e Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.378569 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c890bc-a753-4a38-b8d5-33098898333b" path="/var/lib/kubelet/pods/29c890bc-a753-4a38-b8d5-33098898333b/volumes" Dec 03 20:44:12 crc kubenswrapper[4765]: I1203 20:44:12.379364 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25824b2-7d4e-4fdd-ac80-d2975d802570" path="/var/lib/kubelet/pods/c25824b2-7d4e-4fdd-ac80-d2975d802570/volumes" Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.056717 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" event={"ID":"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32","Type":"ContainerStarted","Data":"fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572"} Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.057100 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.057134 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" event={"ID":"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32","Type":"ContainerStarted","Data":"4888161c77742b58fa4e326f0a090500779f64b3e9d574ec69e21e477128a69e"} Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.060121 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" event={"ID":"2038c612-e3ae-4c59-a620-78af54d2af88","Type":"ContainerStarted","Data":"891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb"} Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 
20:44:13.060189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" event={"ID":"2038c612-e3ae-4c59-a620-78af54d2af88","Type":"ContainerStarted","Data":"163169a0cac3943d2cfd5da828de608876ed87dddfdb52be1b7c5be5b103bc84"} Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.064386 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.104272 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" podStartSLOduration=3.104256225 podStartE2EDuration="3.104256225s" podCreationTimestamp="2025-12-03 20:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:44:13.083657598 +0000 UTC m=+351.014202769" watchObservedRunningTime="2025-12-03 20:44:13.104256225 +0000 UTC m=+351.034801376" Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.106811 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" podStartSLOduration=3.106805362 podStartE2EDuration="3.106805362s" podCreationTimestamp="2025-12-03 20:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:44:13.102921969 +0000 UTC m=+351.033467130" watchObservedRunningTime="2025-12-03 20:44:13.106805362 +0000 UTC m=+351.037350513" Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.136995 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 20:44:13 crc kubenswrapper[4765]: I1203 20:44:13.963143 4765 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 20:44:14 crc kubenswrapper[4765]: I1203 20:44:14.066144 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:14 crc kubenswrapper[4765]: I1203 20:44:14.071190 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:15 crc kubenswrapper[4765]: I1203 20:44:15.102617 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747c54d555-p78sh"] Dec 03 20:44:15 crc kubenswrapper[4765]: I1203 20:44:15.113908 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk"] Dec 03 20:44:15 crc kubenswrapper[4765]: I1203 20:44:15.739235 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 20:44:16 crc kubenswrapper[4765]: I1203 20:44:16.051888 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 20:44:16 crc kubenswrapper[4765]: I1203 20:44:16.074789 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 20:44:16 crc kubenswrapper[4765]: I1203 20:44:16.076719 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" podUID="bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" containerName="controller-manager" containerID="cri-o://fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572" gracePeriod=30 Dec 03 20:44:16 crc kubenswrapper[4765]: I1203 20:44:16.228001 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Dec 03 20:44:16 crc kubenswrapper[4765]: I1203 20:44:16.841856 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.016101 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.038599 4765 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.059547 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6486fb96c7-hx6vg"] Dec 03 20:44:17 crc kubenswrapper[4765]: E1203 20:44:17.059884 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" containerName="controller-manager" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.059909 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" containerName="controller-manager" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.060027 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" containerName="controller-manager" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.060505 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.075905 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6486fb96c7-hx6vg"] Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.083394 4765 generic.go:334] "Generic (PLEG): container finished" podID="bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" containerID="fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572" exitCode=0 Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.083753 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" podUID="2038c612-e3ae-4c59-a620-78af54d2af88" containerName="route-controller-manager" containerID="cri-o://891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb" gracePeriod=30 Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.083921 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.083957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" event={"ID":"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32","Type":"ContainerDied","Data":"fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572"} Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.084720 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747c54d555-p78sh" event={"ID":"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32","Type":"ContainerDied","Data":"4888161c77742b58fa4e326f0a090500779f64b3e9d574ec69e21e477128a69e"} Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.084923 4765 scope.go:117] "RemoveContainer" containerID="fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.113892 4765 scope.go:117] "RemoveContainer" containerID="fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.113948 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xttn5\" (UniqueName: \"kubernetes.io/projected/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-kube-api-access-xttn5\") pod \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.114169 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-serving-cert\") pod \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.114273 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-config\") pod \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.114456 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-proxy-ca-bundles\") pod \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.114560 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-client-ca\") pod \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\" (UID: \"bcbbdd90-b054-4ba4-a1ec-30a1fd829b32\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.114820 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5275759-9f5d-48c3-91fb-26fbea30b8db-serving-cert\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.114914 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwg6p\" (UniqueName: \"kubernetes.io/projected/f5275759-9f5d-48c3-91fb-26fbea30b8db-kube-api-access-vwg6p\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.115029 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-proxy-ca-bundles\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.115183 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-config\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.115379 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-client-ca\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: E1203 20:44:17.116056 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572\": container with ID starting with fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572 not found: ID does not exist" containerID="fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.116120 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572"} err="failed to get container status \"fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572\": rpc error: code = NotFound desc = could not find container 
\"fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572\": container with ID starting with fc5a5aeb7533b9e639d8a524eee4bf097f87e396f61ee7a90da4df0acb6fa572 not found: ID does not exist" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.116706 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" (UID: "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.116826 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-config" (OuterVolumeSpecName: "config") pod "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" (UID: "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.117637 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-client-ca" (OuterVolumeSpecName: "client-ca") pod "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" (UID: "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.119314 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-kube-api-access-xttn5" (OuterVolumeSpecName: "kube-api-access-xttn5") pod "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" (UID: "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32"). InnerVolumeSpecName "kube-api-access-xttn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.121561 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" (UID: "bcbbdd90-b054-4ba4-a1ec-30a1fd829b32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.189184 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217212 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5275759-9f5d-48c3-91fb-26fbea30b8db-serving-cert\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217290 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwg6p\" (UniqueName: \"kubernetes.io/projected/f5275759-9f5d-48c3-91fb-26fbea30b8db-kube-api-access-vwg6p\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217359 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-proxy-ca-bundles\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc 
kubenswrapper[4765]: I1203 20:44:17.217408 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-config\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217462 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-client-ca\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217532 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xttn5\" (UniqueName: \"kubernetes.io/projected/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-kube-api-access-xttn5\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217548 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217560 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217580 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.217590 4765 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.218580 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-client-ca\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.220050 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-proxy-ca-bundles\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.220554 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5275759-9f5d-48c3-91fb-26fbea30b8db-serving-cert\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.223543 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5275759-9f5d-48c3-91fb-26fbea30b8db-config\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.245002 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwg6p\" (UniqueName: 
\"kubernetes.io/projected/f5275759-9f5d-48c3-91fb-26fbea30b8db-kube-api-access-vwg6p\") pod \"controller-manager-6486fb96c7-hx6vg\" (UID: \"f5275759-9f5d-48c3-91fb-26fbea30b8db\") " pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.293726 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.339648 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.382967 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.441216 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-747c54d555-p78sh"] Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.441563 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.444043 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-747c54d555-p78sh"] Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.520512 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klwv7\" (UniqueName: \"kubernetes.io/projected/2038c612-e3ae-4c59-a620-78af54d2af88-kube-api-access-klwv7\") pod \"2038c612-e3ae-4c59-a620-78af54d2af88\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.520597 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-client-ca\") pod \"2038c612-e3ae-4c59-a620-78af54d2af88\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.520627 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-config\") pod \"2038c612-e3ae-4c59-a620-78af54d2af88\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.520687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2038c612-e3ae-4c59-a620-78af54d2af88-serving-cert\") pod \"2038c612-e3ae-4c59-a620-78af54d2af88\" (UID: \"2038c612-e3ae-4c59-a620-78af54d2af88\") " Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.521479 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-client-ca" (OuterVolumeSpecName: "client-ca") pod "2038c612-e3ae-4c59-a620-78af54d2af88" (UID: 
"2038c612-e3ae-4c59-a620-78af54d2af88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.521510 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-config" (OuterVolumeSpecName: "config") pod "2038c612-e3ae-4c59-a620-78af54d2af88" (UID: "2038c612-e3ae-4c59-a620-78af54d2af88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.523827 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2038c612-e3ae-4c59-a620-78af54d2af88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2038c612-e3ae-4c59-a620-78af54d2af88" (UID: "2038c612-e3ae-4c59-a620-78af54d2af88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.524779 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2038c612-e3ae-4c59-a620-78af54d2af88-kube-api-access-klwv7" (OuterVolumeSpecName: "kube-api-access-klwv7") pod "2038c612-e3ae-4c59-a620-78af54d2af88" (UID: "2038c612-e3ae-4c59-a620-78af54d2af88"). InnerVolumeSpecName "kube-api-access-klwv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.598003 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6486fb96c7-hx6vg"] Dec 03 20:44:17 crc kubenswrapper[4765]: W1203 20:44:17.612647 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5275759_9f5d_48c3_91fb_26fbea30b8db.slice/crio-2bb716ddde092a065a5b915bcd889fd6318c534e863ed1f60b331d9f811a89d2 WatchSource:0}: Error finding container 2bb716ddde092a065a5b915bcd889fd6318c534e863ed1f60b331d9f811a89d2: Status 404 returned error can't find the container with id 2bb716ddde092a065a5b915bcd889fd6318c534e863ed1f60b331d9f811a89d2 Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.622183 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2038c612-e3ae-4c59-a620-78af54d2af88-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.622231 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klwv7\" (UniqueName: \"kubernetes.io/projected/2038c612-e3ae-4c59-a620-78af54d2af88-kube-api-access-klwv7\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.622245 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.622256 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2038c612-e3ae-4c59-a620-78af54d2af88-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.729639 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.729872 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.774487 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:17 crc kubenswrapper[4765]: I1203 20:44:17.805754 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.090876 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" event={"ID":"f5275759-9f5d-48c3-91fb-26fbea30b8db","Type":"ContainerStarted","Data":"46dccc5a41e03249612b96c2e235c98d2f19d7fcbd4b450d22b32ab450021180"} Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.090925 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" event={"ID":"f5275759-9f5d-48c3-91fb-26fbea30b8db","Type":"ContainerStarted","Data":"2bb716ddde092a065a5b915bcd889fd6318c534e863ed1f60b331d9f811a89d2"} Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.091272 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.092802 4765 generic.go:334] "Generic (PLEG): container finished" podID="2038c612-e3ae-4c59-a620-78af54d2af88" containerID="891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb" exitCode=0 Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.092829 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" 
event={"ID":"2038c612-e3ae-4c59-a620-78af54d2af88","Type":"ContainerDied","Data":"891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb"} Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.092848 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.092866 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk" event={"ID":"2038c612-e3ae-4c59-a620-78af54d2af88","Type":"ContainerDied","Data":"163169a0cac3943d2cfd5da828de608876ed87dddfdb52be1b7c5be5b103bc84"} Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.092903 4765 scope.go:117] "RemoveContainer" containerID="891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.097947 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.106926 4765 scope.go:117] "RemoveContainer" containerID="891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.106921 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6486fb96c7-hx6vg" podStartSLOduration=3.106909502 podStartE2EDuration="3.106909502s" podCreationTimestamp="2025-12-03 20:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:44:18.106422398 +0000 UTC m=+356.036967559" watchObservedRunningTime="2025-12-03 20:44:18.106909502 +0000 UTC m=+356.037454653" Dec 03 20:44:18 crc kubenswrapper[4765]: E1203 20:44:18.107448 4765 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb\": container with ID starting with 891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb not found: ID does not exist" containerID="891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.107481 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb"} err="failed to get container status \"891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb\": rpc error: code = NotFound desc = could not find container \"891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb\": container with ID starting with 891d808933cbf8802eed322bf78fcc1223fb96d04a3ba0c74da2136d2749a3fb not found: ID does not exist" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.137053 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.143409 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk"] Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.146196 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6988469d-5kfkk"] Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.307245 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.368852 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2038c612-e3ae-4c59-a620-78af54d2af88" path="/var/lib/kubelet/pods/2038c612-e3ae-4c59-a620-78af54d2af88/volumes" Dec 03 
20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.369429 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcbbdd90-b054-4ba4-a1ec-30a1fd829b32" path="/var/lib/kubelet/pods/bcbbdd90-b054-4ba4-a1ec-30a1fd829b32/volumes" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.499383 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.659226 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.695739 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"] Dec 03 20:44:18 crc kubenswrapper[4765]: E1203 20:44:18.695991 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2038c612-e3ae-4c59-a620-78af54d2af88" containerName="route-controller-manager" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.696010 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2038c612-e3ae-4c59-a620-78af54d2af88" containerName="route-controller-manager" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.696114 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2038c612-e3ae-4c59-a620-78af54d2af88" containerName="route-controller-manager" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.696532 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.699779 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.699945 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.700068 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.700187 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.700322 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.702554 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.710573 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"] Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.746190 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-config\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.746318 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hvz\" (UniqueName: \"kubernetes.io/projected/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-kube-api-access-85hvz\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.746380 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-client-ca\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.746424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-serving-cert\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.847444 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-serving-cert\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.847510 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-config\") pod 
\"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.847570 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hvz\" (UniqueName: \"kubernetes.io/projected/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-kube-api-access-85hvz\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.847595 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-client-ca\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.848383 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-client-ca\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.849909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-config\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.853695 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-serving-cert\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.865047 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hvz\" (UniqueName: \"kubernetes.io/projected/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-kube-api-access-85hvz\") pod \"route-controller-manager-7c79778b79-td9j8\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") " pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:18 crc kubenswrapper[4765]: I1203 20:44:18.978632 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 20:44:19 crc kubenswrapper[4765]: I1203 20:44:19.048260 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:19 crc kubenswrapper[4765]: I1203 20:44:19.077538 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 20:44:19 crc kubenswrapper[4765]: I1203 20:44:19.451102 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"] Dec 03 20:44:19 crc kubenswrapper[4765]: W1203 20:44:19.461378 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb20d2976_a7d4_4011_ac9d_5b7ab03cf0de.slice/crio-3636c53df999f7c8080c5fc6795c528b246fa28947f7cd14c6485154623e209c WatchSource:0}: Error finding container 3636c53df999f7c8080c5fc6795c528b246fa28947f7cd14c6485154623e209c: Status 404 returned error can't find the container with id 3636c53df999f7c8080c5fc6795c528b246fa28947f7cd14c6485154623e209c Dec 03 20:44:19 crc kubenswrapper[4765]: I1203 20:44:19.649753 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 20:44:19 crc kubenswrapper[4765]: I1203 20:44:19.781680 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 20:44:19 crc kubenswrapper[4765]: I1203 20:44:19.874373 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.106543 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" event={"ID":"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de","Type":"ContainerStarted","Data":"59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c"} Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.106614 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" event={"ID":"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de","Type":"ContainerStarted","Data":"3636c53df999f7c8080c5fc6795c528b246fa28947f7cd14c6485154623e209c"} Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.122615 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" podStartSLOduration=3.122594572 podStartE2EDuration="3.122594572s" podCreationTimestamp="2025-12-03 20:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:44:20.120913637 +0000 UTC m=+358.051458788" watchObservedRunningTime="2025-12-03 20:44:20.122594572 +0000 UTC m=+358.053139723" Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.247112 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.319422 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.356680 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.565880 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 20:44:20 crc kubenswrapper[4765]: I1203 20:44:20.752200 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 20:44:21 crc kubenswrapper[4765]: I1203 20:44:21.111691 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:21 crc kubenswrapper[4765]: I1203 20:44:21.116567 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" Dec 03 20:44:21 crc kubenswrapper[4765]: I1203 20:44:21.926547 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 20:44:23 crc kubenswrapper[4765]: I1203 20:44:23.013256 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 20:44:24 crc kubenswrapper[4765]: I1203 20:44:24.132011 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 20:44:24 crc kubenswrapper[4765]: I1203 20:44:24.267646 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 20:44:24 crc kubenswrapper[4765]: I1203 20:44:24.273710 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 20:44:24 crc kubenswrapper[4765]: I1203 20:44:24.429100 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 20:44:24 crc kubenswrapper[4765]: I1203 20:44:24.798763 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:44:24 crc kubenswrapper[4765]: I1203 20:44:24.798890 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" 
podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:44:25 crc kubenswrapper[4765]: I1203 20:44:25.735151 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 20:44:25 crc kubenswrapper[4765]: I1203 20:44:25.745265 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 20:44:26 crc kubenswrapper[4765]: I1203 20:44:26.824557 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 20:44:26 crc kubenswrapper[4765]: I1203 20:44:26.874954 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 20:44:27 crc kubenswrapper[4765]: I1203 20:44:27.279015 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 20:44:28 crc kubenswrapper[4765]: I1203 20:44:28.604815 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 20:44:28 crc kubenswrapper[4765]: I1203 20:44:28.610225 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 20:44:29 crc kubenswrapper[4765]: I1203 20:44:29.801461 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pjw7v"] Dec 03 20:44:29 crc kubenswrapper[4765]: I1203 20:44:29.802895 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:29 crc kubenswrapper[4765]: I1203 20:44:29.816920 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pjw7v"] Dec 03 20:44:29 crc kubenswrapper[4765]: I1203 20:44:29.911386 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzk44\" (UniqueName: \"kubernetes.io/projected/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-kube-api-access-kzk44\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:29 crc kubenswrapper[4765]: I1203 20:44:29.911471 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-utilities\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:29 crc kubenswrapper[4765]: I1203 20:44:29.911505 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-catalog-content\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.013231 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzk44\" (UniqueName: \"kubernetes.io/projected/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-kube-api-access-kzk44\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.013446 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-utilities\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.013523 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-catalog-content\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.014510 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-catalog-content\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.014543 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-utilities\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.048811 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzk44\" (UniqueName: \"kubernetes.io/projected/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-kube-api-access-kzk44\") pod \"certified-operators-pjw7v\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.093542 4765 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.131828 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:30 crc kubenswrapper[4765]: I1203 20:44:30.618892 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pjw7v"] Dec 03 20:44:30 crc kubenswrapper[4765]: W1203 20:44:30.628821 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod711b4e95_ecdb_4d3b_9bd9_7a1473108d42.slice/crio-80bba44cfab03665bed0761b8bb67bff977e9dc5bc2958dd18ae88d397f9b43e WatchSource:0}: Error finding container 80bba44cfab03665bed0761b8bb67bff977e9dc5bc2958dd18ae88d397f9b43e: Status 404 returned error can't find the container with id 80bba44cfab03665bed0761b8bb67bff977e9dc5bc2958dd18ae88d397f9b43e Dec 03 20:44:31 crc kubenswrapper[4765]: I1203 20:44:31.185892 4765 generic.go:334] "Generic (PLEG): container finished" podID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerID="50c13a52e0ae690881e532ab74950f1507f7b903bf9d58b795d4773be92dcb89" exitCode=0 Dec 03 20:44:31 crc kubenswrapper[4765]: I1203 20:44:31.185991 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjw7v" event={"ID":"711b4e95-ecdb-4d3b-9bd9-7a1473108d42","Type":"ContainerDied","Data":"50c13a52e0ae690881e532ab74950f1507f7b903bf9d58b795d4773be92dcb89"} Dec 03 20:44:31 crc kubenswrapper[4765]: I1203 20:44:31.186200 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjw7v" event={"ID":"711b4e95-ecdb-4d3b-9bd9-7a1473108d42","Type":"ContainerStarted","Data":"80bba44cfab03665bed0761b8bb67bff977e9dc5bc2958dd18ae88d397f9b43e"} Dec 03 20:44:32 crc kubenswrapper[4765]: I1203 20:44:32.037934 4765 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 20:44:32 crc kubenswrapper[4765]: I1203 20:44:32.194573 4765 generic.go:334] "Generic (PLEG): container finished" podID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerID="ad5f380e91095a9f7b04ab9f3fb538447d74bc055f831b6ff6fba565fbae825a" exitCode=0 Dec 03 20:44:32 crc kubenswrapper[4765]: I1203 20:44:32.194616 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjw7v" event={"ID":"711b4e95-ecdb-4d3b-9bd9-7a1473108d42","Type":"ContainerDied","Data":"ad5f380e91095a9f7b04ab9f3fb538447d74bc055f831b6ff6fba565fbae825a"} Dec 03 20:44:32 crc kubenswrapper[4765]: I1203 20:44:32.763946 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 20:44:33 crc kubenswrapper[4765]: I1203 20:44:33.203805 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjw7v" event={"ID":"711b4e95-ecdb-4d3b-9bd9-7a1473108d42","Type":"ContainerStarted","Data":"b3f3698de39b006f5317bfacb97417c8eeb92f91ff764b708b4903ea23106f6d"} Dec 03 20:44:33 crc kubenswrapper[4765]: I1203 20:44:33.228465 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pjw7v" podStartSLOduration=2.806705814 podStartE2EDuration="4.228435206s" podCreationTimestamp="2025-12-03 20:44:29 +0000 UTC" firstStartedPulling="2025-12-03 20:44:31.189134968 +0000 UTC m=+369.119680149" lastFinishedPulling="2025-12-03 20:44:32.61086435 +0000 UTC m=+370.541409541" observedRunningTime="2025-12-03 20:44:33.219706864 +0000 UTC m=+371.150252025" watchObservedRunningTime="2025-12-03 20:44:33.228435206 +0000 UTC m=+371.158980397" Dec 03 20:44:33 crc kubenswrapper[4765]: I1203 20:44:33.288795 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 
03 20:44:33 crc kubenswrapper[4765]: I1203 20:44:33.712746 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 20:44:34 crc kubenswrapper[4765]: I1203 20:44:34.573751 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 20:44:35 crc kubenswrapper[4765]: I1203 20:44:35.004146 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 20:44:35 crc kubenswrapper[4765]: I1203 20:44:35.549770 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 20:44:36 crc kubenswrapper[4765]: I1203 20:44:36.739565 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 20:44:38 crc kubenswrapper[4765]: I1203 20:44:38.098944 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 20:44:40 crc kubenswrapper[4765]: I1203 20:44:40.132602 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:40 crc kubenswrapper[4765]: I1203 20:44:40.132713 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:40 crc kubenswrapper[4765]: I1203 20:44:40.201038 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:40 crc kubenswrapper[4765]: I1203 20:44:40.299980 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 20:44:43 crc kubenswrapper[4765]: I1203 20:44:43.592560 4765 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 03 20:44:45 crc kubenswrapper[4765]: I1203 20:44:45.192332 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 03 20:44:54 crc kubenswrapper[4765]: I1203 20:44:54.799116 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 20:44:54 crc kubenswrapper[4765]: I1203 20:44:54.799936 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.198615 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"]
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.199680 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.204278 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.204347 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.213409 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"]
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.350624 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kf8r\" (UniqueName: \"kubernetes.io/projected/93c59fd5-d633-4b31-b5fd-7171033bc0de-kube-api-access-5kf8r\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.351675 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c59fd5-d633-4b31-b5fd-7171033bc0de-secret-volume\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.351808 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c59fd5-d633-4b31-b5fd-7171033bc0de-config-volume\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.453282 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c59fd5-d633-4b31-b5fd-7171033bc0de-config-volume\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.453431 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kf8r\" (UniqueName: \"kubernetes.io/projected/93c59fd5-d633-4b31-b5fd-7171033bc0de-kube-api-access-5kf8r\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.453502 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c59fd5-d633-4b31-b5fd-7171033bc0de-secret-volume\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.454524 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c59fd5-d633-4b31-b5fd-7171033bc0de-config-volume\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.470801 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c59fd5-d633-4b31-b5fd-7171033bc0de-secret-volume\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.475654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kf8r\" (UniqueName: \"kubernetes.io/projected/93c59fd5-d633-4b31-b5fd-7171033bc0de-kube-api-access-5kf8r\") pod \"collect-profiles-29413245-n5gdd\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:00 crc kubenswrapper[4765]: I1203 20:45:00.545994 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:01 crc kubenswrapper[4765]: I1203 20:45:01.000166 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"]
Dec 03 20:45:01 crc kubenswrapper[4765]: I1203 20:45:01.370703 4765 generic.go:334] "Generic (PLEG): container finished" podID="93c59fd5-d633-4b31-b5fd-7171033bc0de" containerID="e2f09819e7bf142fca812c117fd1c6dc7458e7c5bfb4ce9453f2e87ffc1acd57" exitCode=0
Dec 03 20:45:01 crc kubenswrapper[4765]: I1203 20:45:01.370766 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd" event={"ID":"93c59fd5-d633-4b31-b5fd-7171033bc0de","Type":"ContainerDied","Data":"e2f09819e7bf142fca812c117fd1c6dc7458e7c5bfb4ce9453f2e87ffc1acd57"}
Dec 03 20:45:01 crc kubenswrapper[4765]: I1203 20:45:01.370787 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd" event={"ID":"93c59fd5-d633-4b31-b5fd-7171033bc0de","Type":"ContainerStarted","Data":"6c3290e3e734742724ca01841cd11bd5a757b2e5964232744dc63ba1397004c7"}
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.443875 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6q5mj"]
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.444942 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.486193 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6q5mj"]
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579057 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-registry-tls\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579133 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87f6848-82fe-45f1-bcaa-d81107578f9b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579175 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579211 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjnrz\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-kube-api-access-rjnrz\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579260 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87f6848-82fe-45f1-bcaa-d81107578f9b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579288 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87f6848-82fe-45f1-bcaa-d81107578f9b-trusted-ca\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579444 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-bound-sa-token\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.579524 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87f6848-82fe-45f1-bcaa-d81107578f9b-registry-certificates\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.633574 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.680890 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-bound-sa-token\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.680951 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87f6848-82fe-45f1-bcaa-d81107578f9b-registry-certificates\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.680977 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-registry-tls\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.681008 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87f6848-82fe-45f1-bcaa-d81107578f9b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.681033 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjnrz\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-kube-api-access-rjnrz\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.681073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87f6848-82fe-45f1-bcaa-d81107578f9b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.681094 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87f6848-82fe-45f1-bcaa-d81107578f9b-trusted-ca\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.681806 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f87f6848-82fe-45f1-bcaa-d81107578f9b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.682572 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f87f6848-82fe-45f1-bcaa-d81107578f9b-trusted-ca\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.682882 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f87f6848-82fe-45f1-bcaa-d81107578f9b-registry-certificates\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.686963 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f87f6848-82fe-45f1-bcaa-d81107578f9b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.691083 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-registry-tls\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.699176 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-bound-sa-token\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.701227 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjnrz\" (UniqueName: \"kubernetes.io/projected/f87f6848-82fe-45f1-bcaa-d81107578f9b-kube-api-access-rjnrz\") pod \"image-registry-66df7c8f76-6q5mj\" (UID: \"f87f6848-82fe-45f1-bcaa-d81107578f9b\") " pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.746619 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.769156 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.781930 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c59fd5-d633-4b31-b5fd-7171033bc0de-secret-volume\") pod \"93c59fd5-d633-4b31-b5fd-7171033bc0de\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") "
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.782074 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kf8r\" (UniqueName: \"kubernetes.io/projected/93c59fd5-d633-4b31-b5fd-7171033bc0de-kube-api-access-5kf8r\") pod \"93c59fd5-d633-4b31-b5fd-7171033bc0de\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") "
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.782187 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c59fd5-d633-4b31-b5fd-7171033bc0de-config-volume\") pod \"93c59fd5-d633-4b31-b5fd-7171033bc0de\" (UID: \"93c59fd5-d633-4b31-b5fd-7171033bc0de\") "
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.783521 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c59fd5-d633-4b31-b5fd-7171033bc0de-config-volume" (OuterVolumeSpecName: "config-volume") pod "93c59fd5-d633-4b31-b5fd-7171033bc0de" (UID: "93c59fd5-d633-4b31-b5fd-7171033bc0de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.790393 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c59fd5-d633-4b31-b5fd-7171033bc0de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "93c59fd5-d633-4b31-b5fd-7171033bc0de" (UID: "93c59fd5-d633-4b31-b5fd-7171033bc0de"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.790438 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c59fd5-d633-4b31-b5fd-7171033bc0de-kube-api-access-5kf8r" (OuterVolumeSpecName: "kube-api-access-5kf8r") pod "93c59fd5-d633-4b31-b5fd-7171033bc0de" (UID: "93c59fd5-d633-4b31-b5fd-7171033bc0de"). InnerVolumeSpecName "kube-api-access-5kf8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.883436 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c59fd5-d633-4b31-b5fd-7171033bc0de-config-volume\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.883464 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93c59fd5-d633-4b31-b5fd-7171033bc0de-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:02 crc kubenswrapper[4765]: I1203 20:45:02.883476 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kf8r\" (UniqueName: \"kubernetes.io/projected/93c59fd5-d633-4b31-b5fd-7171033bc0de-kube-api-access-5kf8r\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.196975 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6q5mj"]
Dec 03 20:45:03 crc kubenswrapper[4765]: W1203 20:45:03.203172 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf87f6848_82fe_45f1_bcaa_d81107578f9b.slice/crio-87c9943bdc343dbd9650789045fa264d8fdf836a6580f33f1e724b1c30642d0a WatchSource:0}: Error finding container 87c9943bdc343dbd9650789045fa264d8fdf836a6580f33f1e724b1c30642d0a: Status 404 returned error can't find the container with id 87c9943bdc343dbd9650789045fa264d8fdf836a6580f33f1e724b1c30642d0a
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.387511 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj" event={"ID":"f87f6848-82fe-45f1-bcaa-d81107578f9b","Type":"ContainerStarted","Data":"77a6bba59a92e4d0393de2cba6138d09c534832ef54283a0f67c39d38fc6d749"}
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.387838 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj" event={"ID":"f87f6848-82fe-45f1-bcaa-d81107578f9b","Type":"ContainerStarted","Data":"87c9943bdc343dbd9650789045fa264d8fdf836a6580f33f1e724b1c30642d0a"}
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.387879 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj"
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.390582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd" event={"ID":"93c59fd5-d633-4b31-b5fd-7171033bc0de","Type":"ContainerDied","Data":"6c3290e3e734742724ca01841cd11bd5a757b2e5964232744dc63ba1397004c7"}
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.390630 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c3290e3e734742724ca01841cd11bd5a757b2e5964232744dc63ba1397004c7"
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.390646 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"
Dec 03 20:45:03 crc kubenswrapper[4765]: I1203 20:45:03.415952 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj" podStartSLOduration=1.41592873 podStartE2EDuration="1.41592873s" podCreationTimestamp="2025-12-03 20:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:45:03.411077816 +0000 UTC m=+401.341622967" watchObservedRunningTime="2025-12-03 20:45:03.41592873 +0000 UTC m=+401.346473881"
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.430241 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"]
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.430983 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" podUID="b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" containerName="route-controller-manager" containerID="cri-o://59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c" gracePeriod=30
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.795925 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.901759 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-client-ca\") pod \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") "
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.901837 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hvz\" (UniqueName: \"kubernetes.io/projected/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-kube-api-access-85hvz\") pod \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") "
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.901916 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-config\") pod \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") "
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.901965 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-serving-cert\") pod \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\" (UID: \"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de\") "
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.903855 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-config" (OuterVolumeSpecName: "config") pod "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" (UID: "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.904119 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-client-ca" (OuterVolumeSpecName: "client-ca") pod "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" (UID: "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.910192 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-kube-api-access-85hvz" (OuterVolumeSpecName: "kube-api-access-85hvz") pod "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" (UID: "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de"). InnerVolumeSpecName "kube-api-access-85hvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:45:10 crc kubenswrapper[4765]: I1203 20:45:10.911788 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" (UID: "b20d2976-a7d4-4011-ac9d-5b7ab03cf0de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.003883 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.003936 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.003954 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-client-ca\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.003972 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85hvz\" (UniqueName: \"kubernetes.io/projected/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de-kube-api-access-85hvz\") on node \"crc\" DevicePath \"\""
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.437825 4765 generic.go:334] "Generic (PLEG): container finished" podID="b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" containerID="59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c" exitCode=0
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.437887 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" event={"ID":"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de","Type":"ContainerDied","Data":"59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c"}
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.437935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8" event={"ID":"b20d2976-a7d4-4011-ac9d-5b7ab03cf0de","Type":"ContainerDied","Data":"3636c53df999f7c8080c5fc6795c528b246fa28947f7cd14c6485154623e209c"}
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.437954 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.437973 4765 scope.go:117] "RemoveContainer" containerID="59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.465845 4765 scope.go:117] "RemoveContainer" containerID="59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c"
Dec 03 20:45:11 crc kubenswrapper[4765]: E1203 20:45:11.466401 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c\": container with ID starting with 59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c not found: ID does not exist" containerID="59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.466471 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c"} err="failed to get container status \"59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c\": rpc error: code = NotFound desc = could not find container \"59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c\": container with ID starting with 59a86703c8a6c354d25ed07f6a11b02ee70975c50fd3ebe6f75e0df88052c04c not found: ID does not exist"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.494828 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"]
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.502989 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c79778b79-td9j8"]
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.734464 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"]
Dec 03 20:45:11 crc kubenswrapper[4765]: E1203 20:45:11.734754 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" containerName="route-controller-manager"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.734778 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" containerName="route-controller-manager"
Dec 03 20:45:11 crc kubenswrapper[4765]: E1203 20:45:11.734803 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c59fd5-d633-4b31-b5fd-7171033bc0de" containerName="collect-profiles"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.734812 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c59fd5-d633-4b31-b5fd-7171033bc0de" containerName="collect-profiles"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.734934 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c59fd5-d633-4b31-b5fd-7171033bc0de" containerName="collect-profiles"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.734956 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" containerName="route-controller-manager"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.735403 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.740435 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.741419 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.741678 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.741927 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.742595 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.747819 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.752832 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"]
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.916007 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-serving-cert\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.916255 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-client-ca\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.916389 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-config\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:11 crc kubenswrapper[4765]: I1203 20:45:11.916480 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fq2m\" (UniqueName: \"kubernetes.io/projected/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-kube-api-access-9fq2m\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.017779 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-config\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.017894 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fq2m\" (UniqueName: \"kubernetes.io/projected/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-kube-api-access-9fq2m\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.018010 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-serving-cert\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.018096 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-client-ca\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.019798 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-client-ca\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.020009 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-config\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"
Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.024027 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-serving-cert\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.034179 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fq2m\" (UniqueName: \"kubernetes.io/projected/ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8-kube-api-access-9fq2m\") pod \"route-controller-manager-86ff8f45b-nkstw\" (UID: \"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8\") " pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.056959 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.370668 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20d2976-a7d4-4011-ac9d-5b7ab03cf0de" path="/var/lib/kubelet/pods/b20d2976-a7d4-4011-ac9d-5b7ab03cf0de/volumes" Dec 03 20:45:12 crc kubenswrapper[4765]: I1203 20:45:12.476661 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw"] Dec 03 20:45:13 crc kubenswrapper[4765]: I1203 20:45:13.452096 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" event={"ID":"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8","Type":"ContainerStarted","Data":"33bd9d870db7f4ab8eee74f2dafc97604670887aa73b06eed6271a549bbed687"} Dec 03 20:45:13 crc kubenswrapper[4765]: I1203 20:45:13.452565 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" 
event={"ID":"ed1619a5-2fad-4eac-81d4-e2c39a1a6dc8","Type":"ContainerStarted","Data":"0c96bfd48f151b91a1a9fd0f6f501bdabaec23dc2d89b96dedbc23e9600be0f9"} Dec 03 20:45:13 crc kubenswrapper[4765]: I1203 20:45:13.452606 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" Dec 03 20:45:13 crc kubenswrapper[4765]: I1203 20:45:13.461755 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" Dec 03 20:45:13 crc kubenswrapper[4765]: I1203 20:45:13.476207 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86ff8f45b-nkstw" podStartSLOduration=3.476164423 podStartE2EDuration="3.476164423s" podCreationTimestamp="2025-12-03 20:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:45:13.474243753 +0000 UTC m=+411.404788924" watchObservedRunningTime="2025-12-03 20:45:13.476164423 +0000 UTC m=+411.406709594" Dec 03 20:45:22 crc kubenswrapper[4765]: I1203 20:45:22.777592 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6q5mj" Dec 03 20:45:22 crc kubenswrapper[4765]: I1203 20:45:22.856648 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mj2gq"] Dec 03 20:45:24 crc kubenswrapper[4765]: I1203 20:45:24.798130 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:45:24 crc kubenswrapper[4765]: I1203 20:45:24.798612 4765 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:45:24 crc kubenswrapper[4765]: I1203 20:45:24.798700 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:45:24 crc kubenswrapper[4765]: I1203 20:45:24.799470 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b19116da5be129719dfdfb13c9574fb7c5ab6b2a3fea2e9387b43a4a284660ec"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 20:45:24 crc kubenswrapper[4765]: I1203 20:45:24.799583 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://b19116da5be129719dfdfb13c9574fb7c5ab6b2a3fea2e9387b43a4a284660ec" gracePeriod=600 Dec 03 20:45:25 crc kubenswrapper[4765]: I1203 20:45:25.536117 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="b19116da5be129719dfdfb13c9574fb7c5ab6b2a3fea2e9387b43a4a284660ec" exitCode=0 Dec 03 20:45:25 crc kubenswrapper[4765]: I1203 20:45:25.536229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"b19116da5be129719dfdfb13c9574fb7c5ab6b2a3fea2e9387b43a4a284660ec"} Dec 03 20:45:25 crc kubenswrapper[4765]: I1203 
20:45:25.536711 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"9aa4b32617093128f6bf7ab64206090db11f5d644179d39ee68c6b4891662abe"} Dec 03 20:45:25 crc kubenswrapper[4765]: I1203 20:45:25.536757 4765 scope.go:117] "RemoveContainer" containerID="fe774c12f96bfaaffd357caf8a6b178bb4e749cab0830b14e2fdc74a5b9ec33d" Dec 03 20:45:47 crc kubenswrapper[4765]: I1203 20:45:47.919948 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" podUID="ad8c5639-241d-47bb-8228-d08219c7c882" containerName="registry" containerID="cri-o://67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec" gracePeriod=30 Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.315822 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456055 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad8c5639-241d-47bb-8228-d08219c7c882-ca-trust-extracted\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456118 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad8c5639-241d-47bb-8228-d08219c7c882-installation-pull-secrets\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456155 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-bound-sa-token\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456270 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-trusted-ca\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456325 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-registry-tls\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456348 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-registry-certificates\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456766 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7689l\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-kube-api-access-7689l\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.456926 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ad8c5639-241d-47bb-8228-d08219c7c882\" (UID: \"ad8c5639-241d-47bb-8228-d08219c7c882\") " Dec 03 20:45:48 crc 
kubenswrapper[4765]: I1203 20:45:48.457281 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.458402 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.462874 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.463623 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-kube-api-access-7689l" (OuterVolumeSpecName: "kube-api-access-7689l") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "kube-api-access-7689l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.464014 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.464928 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8c5639-241d-47bb-8228-d08219c7c882-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.466106 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.473224 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad8c5639-241d-47bb-8228-d08219c7c882-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ad8c5639-241d-47bb-8228-d08219c7c882" (UID: "ad8c5639-241d-47bb-8228-d08219c7c882"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558147 4765 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558184 4765 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558195 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7689l\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-kube-api-access-7689l\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558203 4765 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ad8c5639-241d-47bb-8228-d08219c7c882-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558213 4765 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ad8c5639-241d-47bb-8228-d08219c7c882-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558221 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad8c5639-241d-47bb-8228-d08219c7c882-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.558229 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad8c5639-241d-47bb-8228-d08219c7c882-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:45:48 crc 
kubenswrapper[4765]: I1203 20:45:48.692455 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad8c5639-241d-47bb-8228-d08219c7c882" containerID="67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec" exitCode=0 Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.692521 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" event={"ID":"ad8c5639-241d-47bb-8228-d08219c7c882","Type":"ContainerDied","Data":"67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec"} Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.692563 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" event={"ID":"ad8c5639-241d-47bb-8228-d08219c7c882","Type":"ContainerDied","Data":"b476781554918df824164e065f2e25ce165febc386ddf9767cf9384400e8edfb"} Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.692596 4765 scope.go:117] "RemoveContainer" containerID="67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.692758 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mj2gq" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.717164 4765 scope.go:117] "RemoveContainer" containerID="67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec" Dec 03 20:45:48 crc kubenswrapper[4765]: E1203 20:45:48.717984 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec\": container with ID starting with 67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec not found: ID does not exist" containerID="67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.718025 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec"} err="failed to get container status \"67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec\": rpc error: code = NotFound desc = could not find container \"67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec\": container with ID starting with 67eec13890a8108053775bd0c380e0dfa483b1d21f970e4b9c726ea691e8e6ec not found: ID does not exist" Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.731616 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mj2gq"] Dec 03 20:45:48 crc kubenswrapper[4765]: I1203 20:45:48.738346 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mj2gq"] Dec 03 20:45:50 crc kubenswrapper[4765]: I1203 20:45:50.370610 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8c5639-241d-47bb-8228-d08219c7c882" path="/var/lib/kubelet/pods/ad8c5639-241d-47bb-8228-d08219c7c882/volumes" Dec 03 20:47:54 crc kubenswrapper[4765]: I1203 
20:47:54.799207 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:47:54 crc kubenswrapper[4765]: I1203 20:47:54.799904 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.086735 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-h5t5j"] Dec 03 20:47:56 crc kubenswrapper[4765]: E1203 20:47:56.087715 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8c5639-241d-47bb-8228-d08219c7c882" containerName="registry" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.087750 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8c5639-241d-47bb-8228-d08219c7c882" containerName="registry" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.088047 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8c5639-241d-47bb-8228-d08219c7c882" containerName="registry" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.088764 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-h5t5j" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.090981 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vf7sc"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.091918 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.092866 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.093339 4765 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fsnqn" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.093555 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.094410 4765 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4g6jx" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.096547 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-twkk9"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.110052 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.110059 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlwrc\" (UniqueName: \"kubernetes.io/projected/42acdc1c-8668-4544-886a-4346236c7e76-kube-api-access-qlwrc\") pod \"cert-manager-5b446d88c5-h5t5j\" (UID: \"42acdc1c-8668-4544-886a-4346236c7e76\") " pod="cert-manager/cert-manager-5b446d88c5-h5t5j" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.114525 4765 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8zg4r" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.125083 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-twkk9"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.138407 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-h5t5j"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.140600 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vf7sc"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.212519 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlwrc\" (UniqueName: \"kubernetes.io/projected/42acdc1c-8668-4544-886a-4346236c7e76-kube-api-access-qlwrc\") pod \"cert-manager-5b446d88c5-h5t5j\" (UID: \"42acdc1c-8668-4544-886a-4346236c7e76\") " pod="cert-manager/cert-manager-5b446d88c5-h5t5j" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.212608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9g5w\" (UniqueName: \"kubernetes.io/projected/293b6288-6f0b-4e96-815a-3dffcd7a641c-kube-api-access-z9g5w\") pod \"cert-manager-webhook-5655c58dd6-twkk9\" (UID: 
\"293b6288-6f0b-4e96-815a-3dffcd7a641c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.212649 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8vjz\" (UniqueName: \"kubernetes.io/projected/aca9fa03-3bb8-4912-aa71-037533fe4b0d-kube-api-access-v8vjz\") pod \"cert-manager-cainjector-7f985d654d-vf7sc\" (UID: \"aca9fa03-3bb8-4912-aa71-037533fe4b0d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.232798 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlwrc\" (UniqueName: \"kubernetes.io/projected/42acdc1c-8668-4544-886a-4346236c7e76-kube-api-access-qlwrc\") pod \"cert-manager-5b446d88c5-h5t5j\" (UID: \"42acdc1c-8668-4544-886a-4346236c7e76\") " pod="cert-manager/cert-manager-5b446d88c5-h5t5j" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.313624 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9g5w\" (UniqueName: \"kubernetes.io/projected/293b6288-6f0b-4e96-815a-3dffcd7a641c-kube-api-access-z9g5w\") pod \"cert-manager-webhook-5655c58dd6-twkk9\" (UID: \"293b6288-6f0b-4e96-815a-3dffcd7a641c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.313690 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8vjz\" (UniqueName: \"kubernetes.io/projected/aca9fa03-3bb8-4912-aa71-037533fe4b0d-kube-api-access-v8vjz\") pod \"cert-manager-cainjector-7f985d654d-vf7sc\" (UID: \"aca9fa03-3bb8-4912-aa71-037533fe4b0d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.332026 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9g5w\" (UniqueName: 
\"kubernetes.io/projected/293b6288-6f0b-4e96-815a-3dffcd7a641c-kube-api-access-z9g5w\") pod \"cert-manager-webhook-5655c58dd6-twkk9\" (UID: \"293b6288-6f0b-4e96-815a-3dffcd7a641c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.343768 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8vjz\" (UniqueName: \"kubernetes.io/projected/aca9fa03-3bb8-4912-aa71-037533fe4b0d-kube-api-access-v8vjz\") pod \"cert-manager-cainjector-7f985d654d-vf7sc\" (UID: \"aca9fa03-3bb8-4912-aa71-037533fe4b0d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.420651 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-h5t5j" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.434994 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.449222 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.646126 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-twkk9"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.656009 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.690632 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-h5t5j"] Dec 03 20:47:56 crc kubenswrapper[4765]: I1203 20:47:56.910349 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vf7sc"] Dec 03 20:47:56 crc kubenswrapper[4765]: W1203 20:47:56.912917 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaca9fa03_3bb8_4912_aa71_037533fe4b0d.slice/crio-e458b8b4848d3becd0b49720a16c7d0dbf9fc62104874edf6d9d9241391c25e9 WatchSource:0}: Error finding container e458b8b4848d3becd0b49720a16c7d0dbf9fc62104874edf6d9d9241391c25e9: Status 404 returned error can't find the container with id e458b8b4848d3becd0b49720a16c7d0dbf9fc62104874edf6d9d9241391c25e9 Dec 03 20:47:57 crc kubenswrapper[4765]: I1203 20:47:57.568127 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" event={"ID":"aca9fa03-3bb8-4912-aa71-037533fe4b0d","Type":"ContainerStarted","Data":"e458b8b4848d3becd0b49720a16c7d0dbf9fc62104874edf6d9d9241391c25e9"} Dec 03 20:47:57 crc kubenswrapper[4765]: I1203 20:47:57.573705 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" event={"ID":"293b6288-6f0b-4e96-815a-3dffcd7a641c","Type":"ContainerStarted","Data":"371430907f10dd4fcce44ef297dcf9aef6e6c5e89398d2a5a675ca39f8671584"} Dec 03 20:47:57 crc 
kubenswrapper[4765]: I1203 20:47:57.575684 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-h5t5j" event={"ID":"42acdc1c-8668-4544-886a-4346236c7e76","Type":"ContainerStarted","Data":"20cae0fc7c7a4adfb8226d43e4a9b351ccef87dba0e84839fe6e9425d4613ef7"} Dec 03 20:48:00 crc kubenswrapper[4765]: I1203 20:48:00.595939 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-h5t5j" event={"ID":"42acdc1c-8668-4544-886a-4346236c7e76","Type":"ContainerStarted","Data":"a81bf34b88766d0080abcc42b9b0489588b9d0ce1e24294dfcee3386d1309496"} Dec 03 20:48:00 crc kubenswrapper[4765]: I1203 20:48:00.598310 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" event={"ID":"aca9fa03-3bb8-4912-aa71-037533fe4b0d","Type":"ContainerStarted","Data":"202ae4cc813e52c18dd9e8b594eb74bed4b81db2dd8527d86c244c84d20bbd23"} Dec 03 20:48:00 crc kubenswrapper[4765]: I1203 20:48:00.600240 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" event={"ID":"293b6288-6f0b-4e96-815a-3dffcd7a641c","Type":"ContainerStarted","Data":"dcc9ae426aeed3e42e9fcdb45f96c32ff00ae4d4ce030e4531aeab6e54c9199d"} Dec 03 20:48:00 crc kubenswrapper[4765]: I1203 20:48:00.600441 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:48:00 crc kubenswrapper[4765]: I1203 20:48:00.617731 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-h5t5j" podStartSLOduration=1.477643326 podStartE2EDuration="4.61770765s" podCreationTimestamp="2025-12-03 20:47:56 +0000 UTC" firstStartedPulling="2025-12-03 20:47:56.697165478 +0000 UTC m=+574.627710629" lastFinishedPulling="2025-12-03 20:47:59.837229792 +0000 UTC m=+577.767774953" observedRunningTime="2025-12-03 20:48:00.612869685 +0000 UTC 
m=+578.543414856" watchObservedRunningTime="2025-12-03 20:48:00.61770765 +0000 UTC m=+578.548252801" Dec 03 20:48:00 crc kubenswrapper[4765]: I1203 20:48:00.638803 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" podStartSLOduration=1.45500057 podStartE2EDuration="4.638786723s" podCreationTimestamp="2025-12-03 20:47:56 +0000 UTC" firstStartedPulling="2025-12-03 20:47:56.655758934 +0000 UTC m=+574.586304085" lastFinishedPulling="2025-12-03 20:47:59.839545047 +0000 UTC m=+577.770090238" observedRunningTime="2025-12-03 20:48:00.633482254 +0000 UTC m=+578.564027425" watchObservedRunningTime="2025-12-03 20:48:00.638786723 +0000 UTC m=+578.569331884" Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.453708 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-twkk9" Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.486763 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-vf7sc" podStartSLOduration=7.495859461 podStartE2EDuration="10.486730662s" podCreationTimestamp="2025-12-03 20:47:56 +0000 UTC" firstStartedPulling="2025-12-03 20:47:56.916058651 +0000 UTC m=+574.846603812" lastFinishedPulling="2025-12-03 20:47:59.906929862 +0000 UTC m=+577.837475013" observedRunningTime="2025-12-03 20:48:00.658687652 +0000 UTC m=+578.589232823" watchObservedRunningTime="2025-12-03 20:48:06.486730662 +0000 UTC m=+584.417275873" Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.857019 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9dzdh"] Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.857896 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-controller" 
containerID="cri-o://aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.858026 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.858095 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-acl-logging" containerID="cri-o://42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.857968 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="nbdb" containerID="cri-o://c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.858241 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="northd" containerID="cri-o://5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.858289 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-node" containerID="cri-o://422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 
20:48:06.858367 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="sbdb" containerID="cri-o://f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" gracePeriod=30 Dec 03 20:48:06 crc kubenswrapper[4765]: I1203 20:48:06.909430 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" containerID="cri-o://578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" gracePeriod=30 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.167666 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/3.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.172346 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovn-acl-logging/0.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.172961 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovn-controller/0.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.174023 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.241444 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7bzj8"] Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242130 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242213 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242236 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-node" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242256 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-node" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242375 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="northd" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242394 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="northd" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242466 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-acl-logging" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242486 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-acl-logging" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242547 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" 
containerName="sbdb" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242564 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="sbdb" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242588 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242646 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242666 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242683 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242746 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242764 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242829 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kubecfg-setup" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242849 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kubecfg-setup" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242871 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" 
containerName="nbdb" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242928 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="nbdb" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.242947 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.242962 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243437 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="northd" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243521 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243542 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243609 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243635 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="nbdb" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243702 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovn-acl-logging" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243722 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" 
containerName="ovn-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243744 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243812 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-node" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243911 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.243939 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="sbdb" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.244206 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.244240 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.244264 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.244280 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.244720 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerName="ovnkube-controller" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.246955 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273198 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-systemd-units\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273274 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273379 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-node-log\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273423 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-node-log" (OuterVolumeSpecName: "node-log") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273439 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovn-node-metrics-cert\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273483 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-var-lib-openvswitch\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273545 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf4c4\" (UniqueName: \"kubernetes.io/projected/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-kube-api-access-lf4c4\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273607 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273660 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-netd\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273770 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-ovn-kubernetes\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273814 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273813 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-script-lib\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273871 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-env-overrides\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273891 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-etc-openvswitch\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273919 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-slash\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273896 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273936 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-config\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273933 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273923 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273972 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-slash" (OuterVolumeSpecName: "host-slash") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273974 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273957 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-openvswitch\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.273994 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274006 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-log-socket\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274029 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-kubelet\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274061 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-netns\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274079 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-systemd\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274090 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274095 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-log-socket" (OuterVolumeSpecName: "log-socket") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274112 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-bin\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274113 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274128 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274182 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-ovn\") pod \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\" (UID: \"ad2eb102-7abd-48ad-8287-ab7d2d8a4166\") " Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274228 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274371 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274373 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274429 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-ovn\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274451 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-run-netns\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274468 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-log-socket\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274490 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-cni-netd\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274393 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274499 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274627 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-kubelet\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274649 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-node-log\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274664 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274685 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-systemd\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274702 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-var-lib-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274719 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-cni-bin\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274878 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-env-overrides\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.274926 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovnkube-script-lib\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275002 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-run-ovn-kubernetes\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275046 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-systemd-units\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275083 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovn-node-metrics-cert\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275170 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-slash\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275207 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p46g6\" (UniqueName: \"kubernetes.io/projected/cc7d707d-839d-454b-af0d-1ae93c2af2e4-kube-api-access-p46g6\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275256 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-etc-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275349 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovnkube-config\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275478 4765 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275530 4765 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275558 4765 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-ovn-kubernetes\") on node \"crc\" DevicePath 
\"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275582 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275603 4765 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275624 4765 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275648 4765 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275668 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275689 4765 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275709 4765 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275729 4765 reconciler_common.go:293] "Volume detached for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275752 4765 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275773 4765 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275832 4765 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275854 4765 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275874 4765 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.275895 4765 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.285915 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-kube-api-access-lf4c4" (OuterVolumeSpecName: "kube-api-access-lf4c4") 
pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "kube-api-access-lf4c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.285942 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.301640 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ad2eb102-7abd-48ad-8287-ab7d2d8a4166" (UID: "ad2eb102-7abd-48ad-8287-ab7d2d8a4166"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.376869 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-cni-netd\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377489 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-node-log\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.376987 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-cni-netd\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377575 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-node-log\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377703 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-kubelet\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377827 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377867 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-systemd\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377878 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-var-lib-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-systemd\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 
20:48:07.377937 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-cni-bin\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377962 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-var-lib-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.377987 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-cni-bin\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378011 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-env-overrides\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378038 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovnkube-script-lib\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378074 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-run-ovn-kubernetes\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovn-node-metrics-cert\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378131 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-systemd-units\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378225 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-slash\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378223 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-run-ovn-kubernetes\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378248 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p46g6\" (UniqueName: 
\"kubernetes.io/projected/cc7d707d-839d-454b-af0d-1ae93c2af2e4-kube-api-access-p46g6\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378364 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-etc-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378386 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovnkube-config\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378461 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378490 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-ovn\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378523 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-run-netns\") 
pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378545 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-log-socket\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378633 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf4c4\" (UniqueName: \"kubernetes.io/projected/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-kube-api-access-lf4c4\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378654 4765 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378666 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad2eb102-7abd-48ad-8287-ab7d2d8a4166-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-log-socket\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378731 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-etc-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: 
\"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378739 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovnkube-script-lib\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378786 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-ovn\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378816 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-run-openvswitch\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378838 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-systemd-units\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.378867 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-slash\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc 
kubenswrapper[4765]: I1203 20:48:07.378894 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-run-netns\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.379209 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-env-overrides\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.379410 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovnkube-config\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.380052 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7d707d-839d-454b-af0d-1ae93c2af2e4-host-kubelet\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.384886 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7d707d-839d-454b-af0d-1ae93c2af2e4-ovn-node-metrics-cert\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.398909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-p46g6\" (UniqueName: \"kubernetes.io/projected/cc7d707d-839d-454b-af0d-1ae93c2af2e4-kube-api-access-p46g6\") pod \"ovnkube-node-7bzj8\" (UID: \"cc7d707d-839d-454b-af0d-1ae93c2af2e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.569101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:07 crc kubenswrapper[4765]: W1203 20:48:07.585510 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc7d707d_839d_454b_af0d_1ae93c2af2e4.slice/crio-8b616e012aa47b040d02fa5ca3ab4962c98324456b5f29dba582d6f0d33dcd28 WatchSource:0}: Error finding container 8b616e012aa47b040d02fa5ca3ab4962c98324456b5f29dba582d6f0d33dcd28: Status 404 returned error can't find the container with id 8b616e012aa47b040d02fa5ca3ab4962c98324456b5f29dba582d6f0d33dcd28 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.653583 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovnkube-controller/3.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.655646 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovn-acl-logging/0.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.656088 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9dzdh_ad2eb102-7abd-48ad-8287-ab7d2d8a4166/ovn-controller/0.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.656670 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" exitCode=0 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 
20:48:07.656760 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.656777 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.656844 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.656872 4765 scope.go:117] "RemoveContainer" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.656771 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" exitCode=0 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657031 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" exitCode=0 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657102 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" exitCode=0 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657177 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" exitCode=0 Dec 03 
20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657240 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" exitCode=0 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657326 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" exitCode=143 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657401 4765 generic.go:334] "Generic (PLEG): container finished" podID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" exitCode=143 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657121 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657759 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657812 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657837 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" 
event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657856 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657875 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657887 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657898 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657910 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657921 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657932 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657944 4765 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657955 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657972 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.657997 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658009 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658020 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658032 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658044 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} Dec 03 20:48:07 crc kubenswrapper[4765]: 
I1203 20:48:07.658055 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658066 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658076 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658086 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658097 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658113 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658130 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658144 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658155 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658166 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658176 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658187 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658197 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658209 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658220 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658231 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9dzdh" event={"ID":"ad2eb102-7abd-48ad-8287-ab7d2d8a4166","Type":"ContainerDied","Data":"5a8a417a4bf296c5b255e2abd0147fa22b7c9a3c8bd9570a92ca7d94204c7c23"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658262 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658274 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658285 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658323 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658334 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658345 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658358 4765 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658369 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658381 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.658394 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.661573 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"8b616e012aa47b040d02fa5ca3ab4962c98324456b5f29dba582d6f0d33dcd28"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.663714 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/2.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.664383 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/1.log" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.664453 4765 generic.go:334] "Generic (PLEG): container finished" podID="2d91ef96-b0c9-43eb-8d49-e522199942c9" containerID="9f4e50c7a9c77e4ad4801f69011139f8b6f6169501e5a6e2b884c6fceecfa5da" exitCode=2 Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.664508 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerDied","Data":"9f4e50c7a9c77e4ad4801f69011139f8b6f6169501e5a6e2b884c6fceecfa5da"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.664543 4765 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1"} Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.665146 4765 scope.go:117] "RemoveContainer" containerID="9f4e50c7a9c77e4ad4801f69011139f8b6f6169501e5a6e2b884c6fceecfa5da" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.665450 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-p9xkg_openshift-multus(2d91ef96-b0c9-43eb-8d49-e522199942c9)\"" pod="openshift-multus/multus-p9xkg" podUID="2d91ef96-b0c9-43eb-8d49-e522199942c9" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.683404 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.701478 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9dzdh"] Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.706691 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9dzdh"] Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.713289 4765 scope.go:117] "RemoveContainer" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.764730 4765 scope.go:117] "RemoveContainer" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 
20:48:07.784406 4765 scope.go:117] "RemoveContainer" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.798312 4765 scope.go:117] "RemoveContainer" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.818440 4765 scope.go:117] "RemoveContainer" containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.838707 4765 scope.go:117] "RemoveContainer" containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.866680 4765 scope.go:117] "RemoveContainer" containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.888025 4765 scope.go:117] "RemoveContainer" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.906896 4765 scope.go:117] "RemoveContainer" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.907269 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": container with ID starting with 578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a not found: ID does not exist" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.907316 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} err="failed to get container status 
\"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": rpc error: code = NotFound desc = could not find container \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": container with ID starting with 578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.907342 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.907762 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": container with ID starting with 0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183 not found: ID does not exist" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.907797 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} err="failed to get container status \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": rpc error: code = NotFound desc = could not find container \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": container with ID starting with 0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.907842 4765 scope.go:117] "RemoveContainer" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.908149 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": container with ID starting with f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c not found: ID does not exist" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.908174 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} err="failed to get container status \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": rpc error: code = NotFound desc = could not find container \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": container with ID starting with f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.908215 4765 scope.go:117] "RemoveContainer" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.908801 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": container with ID starting with c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693 not found: ID does not exist" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.908831 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} err="failed to get container status \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": rpc error: code = NotFound desc = could not find container \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": container with ID 
starting with c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.908851 4765 scope.go:117] "RemoveContainer" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.909175 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": container with ID starting with 5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc not found: ID does not exist" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.909214 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} err="failed to get container status \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": rpc error: code = NotFound desc = could not find container \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": container with ID starting with 5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.909259 4765 scope.go:117] "RemoveContainer" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.909605 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": container with ID starting with d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96 not found: ID does not exist" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" Dec 03 
20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.909626 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} err="failed to get container status \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": rpc error: code = NotFound desc = could not find container \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": container with ID starting with d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.909644 4765 scope.go:117] "RemoveContainer" containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.909891 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": container with ID starting with 422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae not found: ID does not exist" containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.909935 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} err="failed to get container status \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": rpc error: code = NotFound desc = could not find container \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": container with ID starting with 422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.909961 4765 scope.go:117] "RemoveContainer" 
containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.910373 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": container with ID starting with 42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000 not found: ID does not exist" containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.910431 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} err="failed to get container status \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": rpc error: code = NotFound desc = could not find container \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": container with ID starting with 42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.910450 4765 scope.go:117] "RemoveContainer" containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.910808 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": container with ID starting with aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203 not found: ID does not exist" containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.910845 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} err="failed to get container status \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": rpc error: code = NotFound desc = could not find container \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": container with ID starting with aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.910869 4765 scope.go:117] "RemoveContainer" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" Dec 03 20:48:07 crc kubenswrapper[4765]: E1203 20:48:07.911181 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": container with ID starting with d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e not found: ID does not exist" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.911208 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} err="failed to get container status \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": rpc error: code = NotFound desc = could not find container \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": container with ID starting with d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.911250 4765 scope.go:117] "RemoveContainer" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.911712 4765 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} err="failed to get container status \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": rpc error: code = NotFound desc = could not find container \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": container with ID starting with 578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.911760 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.912051 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} err="failed to get container status \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": rpc error: code = NotFound desc = could not find container \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": container with ID starting with 0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.912084 4765 scope.go:117] "RemoveContainer" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.912384 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} err="failed to get container status \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": rpc error: code = NotFound desc = could not find container \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": container with ID starting with f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c not 
found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.912434 4765 scope.go:117] "RemoveContainer" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.912774 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} err="failed to get container status \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": rpc error: code = NotFound desc = could not find container \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": container with ID starting with c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.912825 4765 scope.go:117] "RemoveContainer" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.913181 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} err="failed to get container status \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": rpc error: code = NotFound desc = could not find container \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": container with ID starting with 5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.913208 4765 scope.go:117] "RemoveContainer" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.913574 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} err="failed to get 
container status \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": rpc error: code = NotFound desc = could not find container \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": container with ID starting with d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.913606 4765 scope.go:117] "RemoveContainer" containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.913908 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} err="failed to get container status \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": rpc error: code = NotFound desc = could not find container \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": container with ID starting with 422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.913935 4765 scope.go:117] "RemoveContainer" containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.914211 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} err="failed to get container status \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": rpc error: code = NotFound desc = could not find container \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": container with ID starting with 42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.914248 4765 scope.go:117] "RemoveContainer" 
containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.914671 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} err="failed to get container status \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": rpc error: code = NotFound desc = could not find container \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": container with ID starting with aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.914699 4765 scope.go:117] "RemoveContainer" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.914995 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} err="failed to get container status \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": rpc error: code = NotFound desc = could not find container \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": container with ID starting with d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.915022 4765 scope.go:117] "RemoveContainer" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.915339 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} err="failed to get container status \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": rpc error: code = NotFound desc = could 
not find container \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": container with ID starting with 578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.915369 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.915731 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} err="failed to get container status \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": rpc error: code = NotFound desc = could not find container \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": container with ID starting with 0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.915758 4765 scope.go:117] "RemoveContainer" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.916020 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} err="failed to get container status \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": rpc error: code = NotFound desc = could not find container \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": container with ID starting with f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.916054 4765 scope.go:117] "RemoveContainer" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 
20:48:07.916339 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} err="failed to get container status \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": rpc error: code = NotFound desc = could not find container \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": container with ID starting with c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.916379 4765 scope.go:117] "RemoveContainer" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.916646 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} err="failed to get container status \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": rpc error: code = NotFound desc = could not find container \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": container with ID starting with 5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.916679 4765 scope.go:117] "RemoveContainer" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.917041 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} err="failed to get container status \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": rpc error: code = NotFound desc = could not find container \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": container with ID starting with 
d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.917079 4765 scope.go:117] "RemoveContainer" containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.917419 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} err="failed to get container status \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": rpc error: code = NotFound desc = could not find container \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": container with ID starting with 422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.917448 4765 scope.go:117] "RemoveContainer" containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.917726 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} err="failed to get container status \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": rpc error: code = NotFound desc = could not find container \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": container with ID starting with 42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.917748 4765 scope.go:117] "RemoveContainer" containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.918021 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} err="failed to get container status \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": rpc error: code = NotFound desc = could not find container \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": container with ID starting with aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.918072 4765 scope.go:117] "RemoveContainer" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.918460 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} err="failed to get container status \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": rpc error: code = NotFound desc = could not find container \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": container with ID starting with d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.918497 4765 scope.go:117] "RemoveContainer" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.918753 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} err="failed to get container status \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": rpc error: code = NotFound desc = could not find container \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": container with ID starting with 578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a not found: ID does not 
exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.918776 4765 scope.go:117] "RemoveContainer" containerID="0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.919110 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183"} err="failed to get container status \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": rpc error: code = NotFound desc = could not find container \"0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183\": container with ID starting with 0135d353fd46cce1e9adfbfed8942a7ab84070067d8c4924a83809eccdda6183 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.919135 4765 scope.go:117] "RemoveContainer" containerID="f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.919547 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c"} err="failed to get container status \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": rpc error: code = NotFound desc = could not find container \"f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c\": container with ID starting with f9b29167e342c57a67a79b392dec8d06980213090f8daf88869bbb15882f6a3c not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.919575 4765 scope.go:117] "RemoveContainer" containerID="c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.919856 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693"} err="failed to get container status 
\"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": rpc error: code = NotFound desc = could not find container \"c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693\": container with ID starting with c03880ff06d8898d4f634d07fa0c7d10dbf42eb6910ba63a7ab9697f1a57a693 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.919876 4765 scope.go:117] "RemoveContainer" containerID="5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.920163 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc"} err="failed to get container status \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": rpc error: code = NotFound desc = could not find container \"5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc\": container with ID starting with 5310e5208f70fa2790c01665415c7f66443e1612bdc613ed96926d0e430991fc not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.920188 4765 scope.go:117] "RemoveContainer" containerID="d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.920746 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96"} err="failed to get container status \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": rpc error: code = NotFound desc = could not find container \"d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96\": container with ID starting with d02719447b7be5c6639f019b9dafdb63a938a7cceea602dcf12d3302b1671d96 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.920776 4765 scope.go:117] "RemoveContainer" 
containerID="422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.921118 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae"} err="failed to get container status \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": rpc error: code = NotFound desc = could not find container \"422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae\": container with ID starting with 422342d05b14813bf7a163c2d8fa7a52ee20ebc838611f7fbe3225fdce99adae not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.921138 4765 scope.go:117] "RemoveContainer" containerID="42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.921452 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000"} err="failed to get container status \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": rpc error: code = NotFound desc = could not find container \"42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000\": container with ID starting with 42d014dc9aa6b7d4163183ee174af36ae7a1b4a595b7c220131dbc8a00a90000 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.921473 4765 scope.go:117] "RemoveContainer" containerID="aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.921979 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203"} err="failed to get container status \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": rpc error: code = NotFound desc = could 
not find container \"aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203\": container with ID starting with aac34efe8030774290070813c65352a3a5bc33df3cf49ed0541dd6ba2f375203 not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.922016 4765 scope.go:117] "RemoveContainer" containerID="d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.922422 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e"} err="failed to get container status \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": rpc error: code = NotFound desc = could not find container \"d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e\": container with ID starting with d58d852772fc3627f87561617c6ba4b237c0b134c283e18fb14f337c001a4d5e not found: ID does not exist" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.922448 4765 scope.go:117] "RemoveContainer" containerID="578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a" Dec 03 20:48:07 crc kubenswrapper[4765]: I1203 20:48:07.922824 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a"} err="failed to get container status \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": rpc error: code = NotFound desc = could not find container \"578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a\": container with ID starting with 578054e38cbd7b183e008eab873945a03dd14b95a1bbfd2f4adfb39417e04a7a not found: ID does not exist" Dec 03 20:48:08 crc kubenswrapper[4765]: I1203 20:48:08.371930 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad2eb102-7abd-48ad-8287-ab7d2d8a4166" 
path="/var/lib/kubelet/pods/ad2eb102-7abd-48ad-8287-ab7d2d8a4166/volumes" Dec 03 20:48:08 crc kubenswrapper[4765]: I1203 20:48:08.674836 4765 generic.go:334] "Generic (PLEG): container finished" podID="cc7d707d-839d-454b-af0d-1ae93c2af2e4" containerID="f428b5b7c734a93485e26e9ea033908e4e69ef1b4e47f70f4b55b790dd0890af" exitCode=0 Dec 03 20:48:08 crc kubenswrapper[4765]: I1203 20:48:08.674896 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerDied","Data":"f428b5b7c734a93485e26e9ea033908e4e69ef1b4e47f70f4b55b790dd0890af"} Dec 03 20:48:09 crc kubenswrapper[4765]: I1203 20:48:09.687462 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"bcce7b37a9eb1f9124f17b31abe16ee3e0245625937011b94549b993d9de25e3"} Dec 03 20:48:09 crc kubenswrapper[4765]: I1203 20:48:09.687840 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"aa132a8827770a54a9026970f02c6e4eeab8ea6e38aa197ee1d44776088b655e"} Dec 03 20:48:09 crc kubenswrapper[4765]: I1203 20:48:09.687861 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"24b0972f5b507eca808593d1183ece107c8d61522d9ec249f6be4cdde8366a66"} Dec 03 20:48:09 crc kubenswrapper[4765]: I1203 20:48:09.687880 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"5e7e32e1cdd6b2ba5087fd074534202db209818252e14e1436830f7eede15b86"} Dec 03 20:48:09 crc kubenswrapper[4765]: I1203 20:48:09.687897 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"5ee848d64500d5fd2104e3a09ed9151285bb18779ad608af580d28403afda1dd"} Dec 03 20:48:09 crc kubenswrapper[4765]: I1203 20:48:09.687917 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"75382fd97ba58608d7758edf80d4d08ea721b7cec5042746510b45289a1258ac"} Dec 03 20:48:12 crc kubenswrapper[4765]: I1203 20:48:12.708468 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"971775dea4f6409b0584f5bac2ccee3e6ad12a1cc5a26aec90b41e324fb8270c"} Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 20:48:14.724332 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" event={"ID":"cc7d707d-839d-454b-af0d-1ae93c2af2e4","Type":"ContainerStarted","Data":"109fafa3ea32768d035dbb526d48a4741d2c37855d8d3bee3030e0ba042fb952"} Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 20:48:14.724753 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 20:48:14.724775 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 20:48:14.724789 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 20:48:14.751442 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 
20:48:14.753161 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:14 crc kubenswrapper[4765]: I1203 20:48:14.767147 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" podStartSLOduration=7.767128512 podStartE2EDuration="7.767128512s" podCreationTimestamp="2025-12-03 20:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:48:14.766722382 +0000 UTC m=+592.697267543" watchObservedRunningTime="2025-12-03 20:48:14.767128512 +0000 UTC m=+592.697673683" Dec 03 20:48:20 crc kubenswrapper[4765]: I1203 20:48:20.361142 4765 scope.go:117] "RemoveContainer" containerID="9f4e50c7a9c77e4ad4801f69011139f8b6f6169501e5a6e2b884c6fceecfa5da" Dec 03 20:48:20 crc kubenswrapper[4765]: E1203 20:48:20.362160 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-p9xkg_openshift-multus(2d91ef96-b0c9-43eb-8d49-e522199942c9)\"" pod="openshift-multus/multus-p9xkg" podUID="2d91ef96-b0c9-43eb-8d49-e522199942c9" Dec 03 20:48:22 crc kubenswrapper[4765]: I1203 20:48:22.682075 4765 scope.go:117] "RemoveContainer" containerID="0379281ab2baa616c86cbc1448c8200f9571865e5a5ce0151cae540fec35d0e1" Dec 03 20:48:23 crc kubenswrapper[4765]: I1203 20:48:23.805975 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/2.log" Dec 03 20:48:24 crc kubenswrapper[4765]: I1203 20:48:24.799086 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:48:24 crc kubenswrapper[4765]: I1203 20:48:24.799181 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:48:32 crc kubenswrapper[4765]: I1203 20:48:32.366142 4765 scope.go:117] "RemoveContainer" containerID="9f4e50c7a9c77e4ad4801f69011139f8b6f6169501e5a6e2b884c6fceecfa5da" Dec 03 20:48:32 crc kubenswrapper[4765]: I1203 20:48:32.870235 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p9xkg_2d91ef96-b0c9-43eb-8d49-e522199942c9/kube-multus/2.log" Dec 03 20:48:32 crc kubenswrapper[4765]: I1203 20:48:32.871028 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p9xkg" event={"ID":"2d91ef96-b0c9-43eb-8d49-e522199942c9","Type":"ContainerStarted","Data":"640d0da6fe15aec5e9f43f693ea30254a7b677d7d4b446f5b700dd38c86c471a"} Dec 03 20:48:37 crc kubenswrapper[4765]: I1203 20:48:37.599484 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7bzj8" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.289975 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj"] Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.291664 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.294644 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.306337 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj"] Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.347864 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5clv\" (UniqueName: \"kubernetes.io/projected/3720cd85-f431-48f8-8914-2c4196029b6f-kube-api-access-h5clv\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.347958 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.348035 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: 
I1203 20:48:47.448999 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.449137 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5clv\" (UniqueName: \"kubernetes.io/projected/3720cd85-f431-48f8-8914-2c4196029b6f-kube-api-access-h5clv\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.449211 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.450545 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.450615 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.483791 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5clv\" (UniqueName: \"kubernetes.io/projected/3720cd85-f431-48f8-8914-2c4196029b6f-kube-api-access-h5clv\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.609510 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.858035 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj"] Dec 03 20:48:47 crc kubenswrapper[4765]: I1203 20:48:47.968127 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" event={"ID":"3720cd85-f431-48f8-8914-2c4196029b6f","Type":"ContainerStarted","Data":"a65e9f5e4c8a5efda1655164812712ea734df824e5364c43069c74e81ae90712"} Dec 03 20:48:48 crc kubenswrapper[4765]: I1203 20:48:48.977120 4765 generic.go:334] "Generic (PLEG): container finished" podID="3720cd85-f431-48f8-8914-2c4196029b6f" containerID="f1c2f6a74c873dc6e2daf9ed7dadbe9a1ef18ee3b0fb9fc6101af6342e009b95" exitCode=0 Dec 03 20:48:48 crc kubenswrapper[4765]: I1203 20:48:48.977368 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" event={"ID":"3720cd85-f431-48f8-8914-2c4196029b6f","Type":"ContainerDied","Data":"f1c2f6a74c873dc6e2daf9ed7dadbe9a1ef18ee3b0fb9fc6101af6342e009b95"} Dec 03 20:48:50 crc kubenswrapper[4765]: I1203 20:48:50.999343 4765 generic.go:334] "Generic (PLEG): container finished" podID="3720cd85-f431-48f8-8914-2c4196029b6f" containerID="607990d59af8967a5a9d0d0926ee1a12c2273cd2bdb15adfea83694e220e7425" exitCode=0 Dec 03 20:48:50 crc kubenswrapper[4765]: I1203 20:48:50.999658 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" event={"ID":"3720cd85-f431-48f8-8914-2c4196029b6f","Type":"ContainerDied","Data":"607990d59af8967a5a9d0d0926ee1a12c2273cd2bdb15adfea83694e220e7425"} Dec 03 20:48:52 crc kubenswrapper[4765]: I1203 20:48:52.010116 4765 generic.go:334] "Generic (PLEG): container finished" podID="3720cd85-f431-48f8-8914-2c4196029b6f" containerID="a6922df60a9bee9079345f367aed7e00e11c4f6c11164fa1bda54b112700a6c8" exitCode=0 Dec 03 20:48:52 crc kubenswrapper[4765]: I1203 20:48:52.010171 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" event={"ID":"3720cd85-f431-48f8-8914-2c4196029b6f","Type":"ContainerDied","Data":"a6922df60a9bee9079345f367aed7e00e11c4f6c11164fa1bda54b112700a6c8"} Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.325039 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.421348 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5clv\" (UniqueName: \"kubernetes.io/projected/3720cd85-f431-48f8-8914-2c4196029b6f-kube-api-access-h5clv\") pod \"3720cd85-f431-48f8-8914-2c4196029b6f\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.421444 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-bundle\") pod \"3720cd85-f431-48f8-8914-2c4196029b6f\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.421470 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-util\") pod \"3720cd85-f431-48f8-8914-2c4196029b6f\" (UID: \"3720cd85-f431-48f8-8914-2c4196029b6f\") " Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.422655 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-bundle" (OuterVolumeSpecName: "bundle") pod "3720cd85-f431-48f8-8914-2c4196029b6f" (UID: "3720cd85-f431-48f8-8914-2c4196029b6f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.427725 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3720cd85-f431-48f8-8914-2c4196029b6f-kube-api-access-h5clv" (OuterVolumeSpecName: "kube-api-access-h5clv") pod "3720cd85-f431-48f8-8914-2c4196029b6f" (UID: "3720cd85-f431-48f8-8914-2c4196029b6f"). InnerVolumeSpecName "kube-api-access-h5clv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.458859 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-util" (OuterVolumeSpecName: "util") pod "3720cd85-f431-48f8-8914-2c4196029b6f" (UID: "3720cd85-f431-48f8-8914-2c4196029b6f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.523597 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5clv\" (UniqueName: \"kubernetes.io/projected/3720cd85-f431-48f8-8914-2c4196029b6f-kube-api-access-h5clv\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.523656 4765 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:53 crc kubenswrapper[4765]: I1203 20:48:53.523675 4765 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3720cd85-f431-48f8-8914-2c4196029b6f-util\") on node \"crc\" DevicePath \"\"" Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.028651 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" event={"ID":"3720cd85-f431-48f8-8914-2c4196029b6f","Type":"ContainerDied","Data":"a65e9f5e4c8a5efda1655164812712ea734df824e5364c43069c74e81ae90712"} Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.028713 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65e9f5e4c8a5efda1655164812712ea734df824e5364c43069c74e81ae90712" Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.028773 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj" Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.798795 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.798918 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.799011 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.800091 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9aa4b32617093128f6bf7ab64206090db11f5d644179d39ee68c6b4891662abe"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 20:48:54 crc kubenswrapper[4765]: I1203 20:48:54.800237 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://9aa4b32617093128f6bf7ab64206090db11f5d644179d39ee68c6b4891662abe" gracePeriod=600 Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.905650 4765 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx"] Dec 03 20:48:55 crc kubenswrapper[4765]: E1203 20:48:55.907011 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="util" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.907079 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="util" Dec 03 20:48:55 crc kubenswrapper[4765]: E1203 20:48:55.907135 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="extract" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.907224 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="extract" Dec 03 20:48:55 crc kubenswrapper[4765]: E1203 20:48:55.907280 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="pull" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.907410 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="pull" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.907563 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3720cd85-f431-48f8-8914-2c4196029b6f" containerName="extract" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.907986 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.912366 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.913574 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx"] Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.914080 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.917229 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sfr9f" Dec 03 20:48:55 crc kubenswrapper[4765]: I1203 20:48:55.960039 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vk5c\" (UniqueName: \"kubernetes.io/projected/cd09d44f-8050-4a97-a4e9-73ec54239864-kube-api-access-4vk5c\") pod \"nmstate-operator-5b5b58f5c8-2dfvx\" (UID: \"cd09d44f-8050-4a97-a4e9-73ec54239864\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.042197 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="9aa4b32617093128f6bf7ab64206090db11f5d644179d39ee68c6b4891662abe" exitCode=0 Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.042612 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"9aa4b32617093128f6bf7ab64206090db11f5d644179d39ee68c6b4891662abe"} Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.042645 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"3ba36381a71f6d06b4b5aa7cb8542b9c71a3ce01cc92c054d25575f73f145c33"} Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.042667 4765 scope.go:117] "RemoveContainer" containerID="b19116da5be129719dfdfb13c9574fb7c5ab6b2a3fea2e9387b43a4a284660ec" Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.061089 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vk5c\" (UniqueName: \"kubernetes.io/projected/cd09d44f-8050-4a97-a4e9-73ec54239864-kube-api-access-4vk5c\") pod \"nmstate-operator-5b5b58f5c8-2dfvx\" (UID: \"cd09d44f-8050-4a97-a4e9-73ec54239864\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.086157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vk5c\" (UniqueName: \"kubernetes.io/projected/cd09d44f-8050-4a97-a4e9-73ec54239864-kube-api-access-4vk5c\") pod \"nmstate-operator-5b5b58f5c8-2dfvx\" (UID: \"cd09d44f-8050-4a97-a4e9-73ec54239864\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.233011 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" Dec 03 20:48:56 crc kubenswrapper[4765]: I1203 20:48:56.484924 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx"] Dec 03 20:48:56 crc kubenswrapper[4765]: W1203 20:48:56.495479 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd09d44f_8050_4a97_a4e9_73ec54239864.slice/crio-778b8a141408b04a99c795cd9b53e6eb9ed425973f929f637e799eef7f09f55d WatchSource:0}: Error finding container 778b8a141408b04a99c795cd9b53e6eb9ed425973f929f637e799eef7f09f55d: Status 404 returned error can't find the container with id 778b8a141408b04a99c795cd9b53e6eb9ed425973f929f637e799eef7f09f55d Dec 03 20:48:57 crc kubenswrapper[4765]: I1203 20:48:57.053795 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" event={"ID":"cd09d44f-8050-4a97-a4e9-73ec54239864","Type":"ContainerStarted","Data":"778b8a141408b04a99c795cd9b53e6eb9ed425973f929f637e799eef7f09f55d"} Dec 03 20:48:59 crc kubenswrapper[4765]: I1203 20:48:59.070763 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" event={"ID":"cd09d44f-8050-4a97-a4e9-73ec54239864","Type":"ContainerStarted","Data":"ebc40194909cf220ef4e97d10f774a3af4ed4d926392ff1638dbec5ebe4404a0"} Dec 03 20:48:59 crc kubenswrapper[4765]: I1203 20:48:59.091416 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2dfvx" podStartSLOduration=2.151477089 podStartE2EDuration="4.09139435s" podCreationTimestamp="2025-12-03 20:48:55 +0000 UTC" firstStartedPulling="2025-12-03 20:48:56.498725161 +0000 UTC m=+634.429270312" lastFinishedPulling="2025-12-03 20:48:58.438642422 +0000 UTC m=+636.369187573" observedRunningTime="2025-12-03 20:48:59.087099676 +0000 UTC 
m=+637.017644827" watchObservedRunningTime="2025-12-03 20:48:59.09139435 +0000 UTC m=+637.021939521" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.042819 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.044342 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.046272 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rrpkw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.053063 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.053829 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.059580 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.059929 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.083814 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-w2v9s"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.086657 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.100599 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.112562 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqdx\" (UniqueName: \"kubernetes.io/projected/d51bdd1a-e635-4ebb-863b-aaa822deb666-kube-api-access-2zqdx\") pod \"nmstate-metrics-7f946cbc9-jdsgs\" (UID: \"d51bdd1a-e635-4ebb-863b-aaa822deb666\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.112845 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-dbus-socket\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.113048 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-nmstate-lock\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.113187 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-ovs-socket\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.113348 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/86af5345-1169-4a47-8f7c-215533b0d752-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.113440 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgph7\" (UniqueName: \"kubernetes.io/projected/a026c029-77f9-4020-8c0f-6655cbc1dcb6-kube-api-access-zgph7\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.113542 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnnb\" (UniqueName: \"kubernetes.io/projected/86af5345-1169-4a47-8f7c-215533b0d752-kube-api-access-sfnnb\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.184948 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.185588 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.187655 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.187679 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-xqfrc" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.187836 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.209507 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.214599 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnnb\" (UniqueName: \"kubernetes.io/projected/86af5345-1169-4a47-8f7c-215533b0d752-kube-api-access-sfnnb\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.214813 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.214924 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqdx\" (UniqueName: \"kubernetes.io/projected/d51bdd1a-e635-4ebb-863b-aaa822deb666-kube-api-access-2zqdx\") pod \"nmstate-metrics-7f946cbc9-jdsgs\" (UID: 
\"d51bdd1a-e635-4ebb-863b-aaa822deb666\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215048 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-dbus-socket\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flm7\" (UniqueName: \"kubernetes.io/projected/a66f7626-aad6-4d61-91e8-b764b50c5e0b-kube-api-access-8flm7\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215234 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-nmstate-lock\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215352 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-ovs-socket\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215280 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-nmstate-lock\") pod \"nmstate-handler-w2v9s\" (UID: 
\"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215242 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-dbus-socket\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.215433 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a026c029-77f9-4020-8c0f-6655cbc1dcb6-ovs-socket\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.217519 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/86af5345-1169-4a47-8f7c-215533b0d752-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: E1203 20:49:00.217616 4765 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 03 20:49:00 crc kubenswrapper[4765]: E1203 20:49:00.217669 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af5345-1169-4a47-8f7c-215533b0d752-tls-key-pair podName:86af5345-1169-4a47-8f7c-215533b0d752 nodeName:}" failed. No retries permitted until 2025-12-03 20:49:00.71765072 +0000 UTC m=+638.648195871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/86af5345-1169-4a47-8f7c-215533b0d752-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-jlplw" (UID: "86af5345-1169-4a47-8f7c-215533b0d752") : secret "openshift-nmstate-webhook" not found Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.218065 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgph7\" (UniqueName: \"kubernetes.io/projected/a026c029-77f9-4020-8c0f-6655cbc1dcb6-kube-api-access-zgph7\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.218209 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a66f7626-aad6-4d61-91e8-b764b50c5e0b-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.233023 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqdx\" (UniqueName: \"kubernetes.io/projected/d51bdd1a-e635-4ebb-863b-aaa822deb666-kube-api-access-2zqdx\") pod \"nmstate-metrics-7f946cbc9-jdsgs\" (UID: \"d51bdd1a-e635-4ebb-863b-aaa822deb666\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.233854 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnnb\" (UniqueName: \"kubernetes.io/projected/86af5345-1169-4a47-8f7c-215533b0d752-kube-api-access-sfnnb\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.244777 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgph7\" (UniqueName: \"kubernetes.io/projected/a026c029-77f9-4020-8c0f-6655cbc1dcb6-kube-api-access-zgph7\") pod \"nmstate-handler-w2v9s\" (UID: \"a026c029-77f9-4020-8c0f-6655cbc1dcb6\") " pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.319832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.319895 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8flm7\" (UniqueName: \"kubernetes.io/projected/a66f7626-aad6-4d61-91e8-b764b50c5e0b-kube-api-access-8flm7\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.319966 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a66f7626-aad6-4d61-91e8-b764b50c5e0b-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.320966 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a66f7626-aad6-4d61-91e8-b764b50c5e0b-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 
03 20:49:00 crc kubenswrapper[4765]: E1203 20:49:00.321048 4765 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 03 20:49:00 crc kubenswrapper[4765]: E1203 20:49:00.321089 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert podName:a66f7626-aad6-4d61-91e8-b764b50c5e0b nodeName:}" failed. No retries permitted until 2025-12-03 20:49:00.821076838 +0000 UTC m=+638.751621989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-fd5mm" (UID: "a66f7626-aad6-4d61-91e8-b764b50c5e0b") : secret "plugin-serving-cert" not found Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.339181 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8flm7\" (UniqueName: \"kubernetes.io/projected/a66f7626-aad6-4d61-91e8-b764b50c5e0b-kube-api-access-8flm7\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.377790 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.384721 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67c7449f96-7h4sh"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.385365 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.405217 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67c7449f96-7h4sh"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.423044 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424036 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-trusted-ca-bundle\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424157 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-config\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424194 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878gp\" (UniqueName: \"kubernetes.io/projected/81692b3d-3bdf-49a7-b434-fe3b6a07da87-kube-api-access-878gp\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424225 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-service-ca\") pod \"console-67c7449f96-7h4sh\" (UID: 
\"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424271 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-oauth-serving-cert\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424375 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-serving-cert\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.424410 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-oauth-config\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: W1203 20:49:00.459520 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda026c029_77f9_4020_8c0f_6655cbc1dcb6.slice/crio-de6954e3b94ebfa61e5dcf3649ad09f3736232ab009f9b2f75cc81732b66dabb WatchSource:0}: Error finding container de6954e3b94ebfa61e5dcf3649ad09f3736232ab009f9b2f75cc81732b66dabb: Status 404 returned error can't find the container with id de6954e3b94ebfa61e5dcf3649ad09f3736232ab009f9b2f75cc81732b66dabb Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.525561 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-oauth-serving-cert\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.525847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-serving-cert\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.525872 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-oauth-config\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.525901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-trusted-ca-bundle\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.525958 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-config\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.525979 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-878gp\" (UniqueName: \"kubernetes.io/projected/81692b3d-3bdf-49a7-b434-fe3b6a07da87-kube-api-access-878gp\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.526001 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-service-ca\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.526867 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-service-ca\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.529364 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-oauth-serving-cert\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.530111 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-config\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.530371 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/81692b3d-3bdf-49a7-b434-fe3b6a07da87-trusted-ca-bundle\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.532999 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-oauth-config\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.533051 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81692b3d-3bdf-49a7-b434-fe3b6a07da87-console-serving-cert\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.547605 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878gp\" (UniqueName: \"kubernetes.io/projected/81692b3d-3bdf-49a7-b434-fe3b6a07da87-kube-api-access-878gp\") pod \"console-67c7449f96-7h4sh\" (UID: \"81692b3d-3bdf-49a7-b434-fe3b6a07da87\") " pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.665702 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs"] Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.728574 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/86af5345-1169-4a47-8f7c-215533b0d752-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc 
kubenswrapper[4765]: I1203 20:49:00.732273 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/86af5345-1169-4a47-8f7c-215533b0d752-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-jlplw\" (UID: \"86af5345-1169-4a47-8f7c-215533b0d752\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.757365 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.829759 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:00 crc kubenswrapper[4765]: E1203 20:49:00.829982 4765 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 03 20:49:00 crc kubenswrapper[4765]: E1203 20:49:00.830031 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert podName:a66f7626-aad6-4d61-91e8-b764b50c5e0b nodeName:}" failed. No retries permitted until 2025-12-03 20:49:01.830016629 +0000 UTC m=+639.760561780 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-fd5mm" (UID: "a66f7626-aad6-4d61-91e8-b764b50c5e0b") : secret "plugin-serving-cert" not found Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.950977 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67c7449f96-7h4sh"] Dec 03 20:49:00 crc kubenswrapper[4765]: W1203 20:49:00.956344 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81692b3d_3bdf_49a7_b434_fe3b6a07da87.slice/crio-d064ef74f2249aac465857e10b9c9d05187a088d3c627213a8999e3025d2145a WatchSource:0}: Error finding container d064ef74f2249aac465857e10b9c9d05187a088d3c627213a8999e3025d2145a: Status 404 returned error can't find the container with id d064ef74f2249aac465857e10b9c9d05187a088d3c627213a8999e3025d2145a Dec 03 20:49:00 crc kubenswrapper[4765]: I1203 20:49:00.988360 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:01 crc kubenswrapper[4765]: I1203 20:49:01.091464 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67c7449f96-7h4sh" event={"ID":"81692b3d-3bdf-49a7-b434-fe3b6a07da87","Type":"ContainerStarted","Data":"d064ef74f2249aac465857e10b9c9d05187a088d3c627213a8999e3025d2145a"} Dec 03 20:49:01 crc kubenswrapper[4765]: I1203 20:49:01.092476 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" event={"ID":"d51bdd1a-e635-4ebb-863b-aaa822deb666","Type":"ContainerStarted","Data":"c22febb7444842ac392ec2ae72b492fc6bce04aab1b50d5c94d5be116c7f684a"} Dec 03 20:49:01 crc kubenswrapper[4765]: I1203 20:49:01.093463 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-w2v9s" event={"ID":"a026c029-77f9-4020-8c0f-6655cbc1dcb6","Type":"ContainerStarted","Data":"de6954e3b94ebfa61e5dcf3649ad09f3736232ab009f9b2f75cc81732b66dabb"} Dec 03 20:49:01 crc kubenswrapper[4765]: I1203 20:49:01.391478 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw"] Dec 03 20:49:01 crc kubenswrapper[4765]: W1203 20:49:01.392229 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86af5345_1169_4a47_8f7c_215533b0d752.slice/crio-b17e772a4adb1437d2d159a4b11e8d1962cffc7a8e4776b34818d5e9e14a3327 WatchSource:0}: Error finding container b17e772a4adb1437d2d159a4b11e8d1962cffc7a8e4776b34818d5e9e14a3327: Status 404 returned error can't find the container with id b17e772a4adb1437d2d159a4b11e8d1962cffc7a8e4776b34818d5e9e14a3327 Dec 03 20:49:01 crc kubenswrapper[4765]: I1203 20:49:01.846291 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:01 crc kubenswrapper[4765]: I1203 20:49:01.852775 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a66f7626-aad6-4d61-91e8-b764b50c5e0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-fd5mm\" (UID: \"a66f7626-aad6-4d61-91e8-b764b50c5e0b\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:02 crc kubenswrapper[4765]: I1203 20:49:02.000078 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" Dec 03 20:49:02 crc kubenswrapper[4765]: I1203 20:49:02.099935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" event={"ID":"86af5345-1169-4a47-8f7c-215533b0d752","Type":"ContainerStarted","Data":"b17e772a4adb1437d2d159a4b11e8d1962cffc7a8e4776b34818d5e9e14a3327"} Dec 03 20:49:02 crc kubenswrapper[4765]: I1203 20:49:02.101311 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67c7449f96-7h4sh" event={"ID":"81692b3d-3bdf-49a7-b434-fe3b6a07da87","Type":"ContainerStarted","Data":"6ddbacd70271344b8d1e9f8f36cc26d64ccb961db4b55e5fb0ff64f27f3f2cca"} Dec 03 20:49:02 crc kubenswrapper[4765]: I1203 20:49:02.120803 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67c7449f96-7h4sh" podStartSLOduration=2.120786038 podStartE2EDuration="2.120786038s" podCreationTimestamp="2025-12-03 20:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:49:02.119101187 +0000 UTC m=+640.049646338" 
watchObservedRunningTime="2025-12-03 20:49:02.120786038 +0000 UTC m=+640.051331189" Dec 03 20:49:02 crc kubenswrapper[4765]: I1203 20:49:02.755680 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm"] Dec 03 20:49:02 crc kubenswrapper[4765]: W1203 20:49:02.773441 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda66f7626_aad6_4d61_91e8_b764b50c5e0b.slice/crio-ae9ca745f006ca2069a757ab5f3d132f755335dd6c1a7538bed4a115e4780251 WatchSource:0}: Error finding container ae9ca745f006ca2069a757ab5f3d132f755335dd6c1a7538bed4a115e4780251: Status 404 returned error can't find the container with id ae9ca745f006ca2069a757ab5f3d132f755335dd6c1a7538bed4a115e4780251 Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.113626 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-w2v9s" event={"ID":"a026c029-77f9-4020-8c0f-6655cbc1dcb6","Type":"ContainerStarted","Data":"4d7f90a626709605673e7e8f06f61bd8ad5f0325a55fb117147a453dd164b44b"} Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.113813 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.121110 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" event={"ID":"86af5345-1169-4a47-8f7c-215533b0d752","Type":"ContainerStarted","Data":"73c60a65aba21cc02ec545d63e18fa76fdd6c7e64c59a236ee541d85afc8cb12"} Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.121830 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.127205 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" 
event={"ID":"a66f7626-aad6-4d61-91e8-b764b50c5e0b","Type":"ContainerStarted","Data":"ae9ca745f006ca2069a757ab5f3d132f755335dd6c1a7538bed4a115e4780251"} Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.132648 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" event={"ID":"d51bdd1a-e635-4ebb-863b-aaa822deb666","Type":"ContainerStarted","Data":"31155c22ae946d95169152e4fb4e465fc2d806406baddde89701e6b1ae17d9cf"} Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.160171 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-w2v9s" podStartSLOduration=1.053741454 podStartE2EDuration="3.160144931s" podCreationTimestamp="2025-12-03 20:49:00 +0000 UTC" firstStartedPulling="2025-12-03 20:49:00.465154942 +0000 UTC m=+638.395700093" lastFinishedPulling="2025-12-03 20:49:02.571558379 +0000 UTC m=+640.502103570" observedRunningTime="2025-12-03 20:49:03.14851806 +0000 UTC m=+641.079063361" watchObservedRunningTime="2025-12-03 20:49:03.160144931 +0000 UTC m=+641.090690112" Dec 03 20:49:03 crc kubenswrapper[4765]: I1203 20:49:03.179147 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" podStartSLOduration=1.987102728 podStartE2EDuration="3.179126382s" podCreationTimestamp="2025-12-03 20:49:00 +0000 UTC" firstStartedPulling="2025-12-03 20:49:01.396775913 +0000 UTC m=+639.327321104" lastFinishedPulling="2025-12-03 20:49:02.588799607 +0000 UTC m=+640.519344758" observedRunningTime="2025-12-03 20:49:03.174504369 +0000 UTC m=+641.105049530" watchObservedRunningTime="2025-12-03 20:49:03.179126382 +0000 UTC m=+641.109671543" Dec 03 20:49:05 crc kubenswrapper[4765]: I1203 20:49:05.146481 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" 
event={"ID":"d51bdd1a-e635-4ebb-863b-aaa822deb666","Type":"ContainerStarted","Data":"9c6b3c498824cb2416c95b044514f44857e9deb4eae82338ad6009801a71dfb3"} Dec 03 20:49:07 crc kubenswrapper[4765]: I1203 20:49:07.162875 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" event={"ID":"a66f7626-aad6-4d61-91e8-b764b50c5e0b","Type":"ContainerStarted","Data":"931d68ed3d2e89c2644725a7546f8750f356f38b96130a4f0fcc2ee0d3d8cf48"} Dec 03 20:49:07 crc kubenswrapper[4765]: I1203 20:49:07.184932 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-fd5mm" podStartSLOduration=3.55557227 podStartE2EDuration="7.184910807s" podCreationTimestamp="2025-12-03 20:49:00 +0000 UTC" firstStartedPulling="2025-12-03 20:49:02.782039483 +0000 UTC m=+640.712584634" lastFinishedPulling="2025-12-03 20:49:06.41137802 +0000 UTC m=+644.341923171" observedRunningTime="2025-12-03 20:49:07.184224289 +0000 UTC m=+645.114769440" watchObservedRunningTime="2025-12-03 20:49:07.184910807 +0000 UTC m=+645.115455958" Dec 03 20:49:07 crc kubenswrapper[4765]: I1203 20:49:07.186744 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdsgs" podStartSLOduration=3.097762899 podStartE2EDuration="7.18673775s" podCreationTimestamp="2025-12-03 20:49:00 +0000 UTC" firstStartedPulling="2025-12-03 20:49:00.671721561 +0000 UTC m=+638.602266712" lastFinishedPulling="2025-12-03 20:49:04.760696412 +0000 UTC m=+642.691241563" observedRunningTime="2025-12-03 20:49:05.169534446 +0000 UTC m=+643.100079637" watchObservedRunningTime="2025-12-03 20:49:07.18673775 +0000 UTC m=+645.117282901" Dec 03 20:49:10 crc kubenswrapper[4765]: I1203 20:49:10.446905 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-w2v9s" Dec 03 20:49:10 crc kubenswrapper[4765]: I1203 
20:49:10.758363 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:10 crc kubenswrapper[4765]: I1203 20:49:10.758409 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:10 crc kubenswrapper[4765]: I1203 20:49:10.763385 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:11 crc kubenswrapper[4765]: I1203 20:49:11.199172 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67c7449f96-7h4sh" Dec 03 20:49:11 crc kubenswrapper[4765]: I1203 20:49:11.302156 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-stgcm"] Dec 03 20:49:20 crc kubenswrapper[4765]: I1203 20:49:20.995145 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-jlplw" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.360825 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj"] Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.362754 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.364799 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.387136 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj"] Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.492740 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.492906 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.493023 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgfp\" (UniqueName: \"kubernetes.io/projected/56131f70-b87d-4e36-a680-eab8d3bbee72-kube-api-access-9qgfp\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: 
I1203 20:49:35.594068 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgfp\" (UniqueName: \"kubernetes.io/projected/56131f70-b87d-4e36-a680-eab8d3bbee72-kube-api-access-9qgfp\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.594222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.594266 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.595022 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.595189 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.630101 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgfp\" (UniqueName: \"kubernetes.io/projected/56131f70-b87d-4e36-a680-eab8d3bbee72-kube-api-access-9qgfp\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.702336 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:35 crc kubenswrapper[4765]: I1203 20:49:35.967617 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj"] Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.339253 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-stgcm" podUID="de590c28-833f-4c0b-9184-62a37519a9e0" containerName="console" containerID="cri-o://402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31" gracePeriod=15 Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.391422 4765 generic.go:334] "Generic (PLEG): container finished" podID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerID="69ed50b3363d64c263396d53c1f21224e6e0a28d76944d59c777a947aa397155" exitCode=0 Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.391490 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" event={"ID":"56131f70-b87d-4e36-a680-eab8d3bbee72","Type":"ContainerDied","Data":"69ed50b3363d64c263396d53c1f21224e6e0a28d76944d59c777a947aa397155"} Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.391523 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" event={"ID":"56131f70-b87d-4e36-a680-eab8d3bbee72","Type":"ContainerStarted","Data":"c6d59b176c25b2151e842c42d0a1e0fc688d5f44eb45d31160fbf843f4fc614e"} Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.777424 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-stgcm_de590c28-833f-4c0b-9184-62a37519a9e0/console/0.log" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.777496 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.916760 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-oauth-config\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.916818 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-trusted-ca-bundle\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.916842 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-oauth-serving-cert\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.916877 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-console-config\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.916952 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczh5\" (UniqueName: \"kubernetes.io/projected/de590c28-833f-4c0b-9184-62a37519a9e0-kube-api-access-kczh5\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.916988 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-service-ca\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.917039 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-serving-cert\") pod \"de590c28-833f-4c0b-9184-62a37519a9e0\" (UID: \"de590c28-833f-4c0b-9184-62a37519a9e0\") " Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.917890 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-console-config" (OuterVolumeSpecName: "console-config") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.917904 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.918157 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-service-ca" (OuterVolumeSpecName: "service-ca") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.918657 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.923762 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.924429 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:49:36 crc kubenswrapper[4765]: I1203 20:49:36.925003 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de590c28-833f-4c0b-9184-62a37519a9e0-kube-api-access-kczh5" (OuterVolumeSpecName: "kube-api-access-kczh5") pod "de590c28-833f-4c0b-9184-62a37519a9e0" (UID: "de590c28-833f-4c0b-9184-62a37519a9e0"). InnerVolumeSpecName "kube-api-access-kczh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018606 4765 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018645 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018660 4765 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018674 4765 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018688 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kczh5\" (UniqueName: \"kubernetes.io/projected/de590c28-833f-4c0b-9184-62a37519a9e0-kube-api-access-kczh5\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018702 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de590c28-833f-4c0b-9184-62a37519a9e0-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.018715 4765 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de590c28-833f-4c0b-9184-62a37519a9e0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.402087 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-stgcm_de590c28-833f-4c0b-9184-62a37519a9e0/console/0.log" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.402155 4765 generic.go:334] "Generic (PLEG): container finished" podID="de590c28-833f-4c0b-9184-62a37519a9e0" containerID="402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31" exitCode=2 Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.402195 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-stgcm" event={"ID":"de590c28-833f-4c0b-9184-62a37519a9e0","Type":"ContainerDied","Data":"402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31"} Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.402229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-stgcm" 
event={"ID":"de590c28-833f-4c0b-9184-62a37519a9e0","Type":"ContainerDied","Data":"9ace2f839b02af642e5a57975d8971f4197113ac9a05e65668da66e2bf8fb466"} Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.402259 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-stgcm" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.402268 4765 scope.go:117] "RemoveContainer" containerID="402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.423926 4765 scope.go:117] "RemoveContainer" containerID="402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31" Dec 03 20:49:37 crc kubenswrapper[4765]: E1203 20:49:37.424430 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31\": container with ID starting with 402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31 not found: ID does not exist" containerID="402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.424719 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31"} err="failed to get container status \"402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31\": rpc error: code = NotFound desc = could not find container \"402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31\": container with ID starting with 402eb41b807aee101285e824d384ba38ce8f92a9a3d30d0d80f4a4f308d3fb31 not found: ID does not exist" Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.464561 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-stgcm"] Dec 03 20:49:37 crc kubenswrapper[4765]: I1203 20:49:37.469621 4765 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-stgcm"] Dec 03 20:49:38 crc kubenswrapper[4765]: I1203 20:49:38.369542 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de590c28-833f-4c0b-9184-62a37519a9e0" path="/var/lib/kubelet/pods/de590c28-833f-4c0b-9184-62a37519a9e0/volumes" Dec 03 20:49:38 crc kubenswrapper[4765]: I1203 20:49:38.412751 4765 generic.go:334] "Generic (PLEG): container finished" podID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerID="85d5a8d7f310c0c4d7e6ec1fab03dc4c6948442384b6500190b9115b31137764" exitCode=0 Dec 03 20:49:38 crc kubenswrapper[4765]: I1203 20:49:38.412880 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" event={"ID":"56131f70-b87d-4e36-a680-eab8d3bbee72","Type":"ContainerDied","Data":"85d5a8d7f310c0c4d7e6ec1fab03dc4c6948442384b6500190b9115b31137764"} Dec 03 20:49:39 crc kubenswrapper[4765]: I1203 20:49:39.426203 4765 generic.go:334] "Generic (PLEG): container finished" podID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerID="8ab4c47cd7630d63459f8b46134631c561b86349e238068a197b6f52fbd2d451" exitCode=0 Dec 03 20:49:39 crc kubenswrapper[4765]: I1203 20:49:39.426254 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" event={"ID":"56131f70-b87d-4e36-a680-eab8d3bbee72","Type":"ContainerDied","Data":"8ab4c47cd7630d63459f8b46134631c561b86349e238068a197b6f52fbd2d451"} Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.770257 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.886589 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-util\") pod \"56131f70-b87d-4e36-a680-eab8d3bbee72\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.886641 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgfp\" (UniqueName: \"kubernetes.io/projected/56131f70-b87d-4e36-a680-eab8d3bbee72-kube-api-access-9qgfp\") pod \"56131f70-b87d-4e36-a680-eab8d3bbee72\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.886691 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-bundle\") pod \"56131f70-b87d-4e36-a680-eab8d3bbee72\" (UID: \"56131f70-b87d-4e36-a680-eab8d3bbee72\") " Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.887990 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-bundle" (OuterVolumeSpecName: "bundle") pod "56131f70-b87d-4e36-a680-eab8d3bbee72" (UID: "56131f70-b87d-4e36-a680-eab8d3bbee72"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.895707 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56131f70-b87d-4e36-a680-eab8d3bbee72-kube-api-access-9qgfp" (OuterVolumeSpecName: "kube-api-access-9qgfp") pod "56131f70-b87d-4e36-a680-eab8d3bbee72" (UID: "56131f70-b87d-4e36-a680-eab8d3bbee72"). InnerVolumeSpecName "kube-api-access-9qgfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.900025 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-util" (OuterVolumeSpecName: "util") pod "56131f70-b87d-4e36-a680-eab8d3bbee72" (UID: "56131f70-b87d-4e36-a680-eab8d3bbee72"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.988452 4765 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-util\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.988495 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qgfp\" (UniqueName: \"kubernetes.io/projected/56131f70-b87d-4e36-a680-eab8d3bbee72-kube-api-access-9qgfp\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:40 crc kubenswrapper[4765]: I1203 20:49:40.988507 4765 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56131f70-b87d-4e36-a680-eab8d3bbee72-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:49:41 crc kubenswrapper[4765]: I1203 20:49:41.447103 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" event={"ID":"56131f70-b87d-4e36-a680-eab8d3bbee72","Type":"ContainerDied","Data":"c6d59b176c25b2151e842c42d0a1e0fc688d5f44eb45d31160fbf843f4fc614e"} Dec 03 20:49:41 crc kubenswrapper[4765]: I1203 20:49:41.447411 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d59b176c25b2151e842c42d0a1e0fc688d5f44eb45d31160fbf843f4fc614e" Dec 03 20:49:41 crc kubenswrapper[4765]: I1203 20:49:41.447181 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.599900 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk"] Dec 03 20:49:52 crc kubenswrapper[4765]: E1203 20:49:52.600501 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="util" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.600512 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="util" Dec 03 20:49:52 crc kubenswrapper[4765]: E1203 20:49:52.600522 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="extract" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.600527 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="extract" Dec 03 20:49:52 crc kubenswrapper[4765]: E1203 20:49:52.600539 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de590c28-833f-4c0b-9184-62a37519a9e0" containerName="console" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.600545 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="de590c28-833f-4c0b-9184-62a37519a9e0" containerName="console" Dec 03 20:49:52 crc kubenswrapper[4765]: E1203 20:49:52.600560 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="pull" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.600565 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="pull" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.600658 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="de590c28-833f-4c0b-9184-62a37519a9e0" 
containerName="console" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.600666 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="56131f70-b87d-4e36-a680-eab8d3bbee72" containerName="extract" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.601000 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.602778 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.602941 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.603185 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.603296 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.603439 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rdtm6" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.674878 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk"] Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.770649 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjzv\" (UniqueName: \"kubernetes.io/projected/56245235-eef6-472d-b481-1b9d7f80b89c-kube-api-access-9vjzv\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " 
pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.770935 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56245235-eef6-472d-b481-1b9d7f80b89c-webhook-cert\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.770974 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56245235-eef6-472d-b481-1b9d7f80b89c-apiservice-cert\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.872028 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjzv\" (UniqueName: \"kubernetes.io/projected/56245235-eef6-472d-b481-1b9d7f80b89c-kube-api-access-9vjzv\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.872078 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56245235-eef6-472d-b481-1b9d7f80b89c-webhook-cert\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.872111 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56245235-eef6-472d-b481-1b9d7f80b89c-apiservice-cert\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.879433 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56245235-eef6-472d-b481-1b9d7f80b89c-webhook-cert\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.879443 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56245235-eef6-472d-b481-1b9d7f80b89c-apiservice-cert\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.890727 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjzv\" (UniqueName: \"kubernetes.io/projected/56245235-eef6-472d-b481-1b9d7f80b89c-kube-api-access-9vjzv\") pod \"metallb-operator-controller-manager-869886bfd4-t75fk\" (UID: \"56245235-eef6-472d-b481-1b9d7f80b89c\") " pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.936467 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g"] Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.937158 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.940219 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.940279 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.940719 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dkr9t" Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.950708 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g"] Dec 03 20:49:52 crc kubenswrapper[4765]: I1203 20:49:52.979431 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.074016 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhsxf\" (UniqueName: \"kubernetes.io/projected/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-kube-api-access-zhsxf\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.074096 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-apiservice-cert\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: 
I1203 20:49:53.074130 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-webhook-cert\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.175215 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhsxf\" (UniqueName: \"kubernetes.io/projected/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-kube-api-access-zhsxf\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.175585 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-apiservice-cert\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.175629 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-webhook-cert\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.179680 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-webhook-cert\") pod 
\"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.187012 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-apiservice-cert\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.187241 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk"] Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.204249 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhsxf\" (UniqueName: \"kubernetes.io/projected/79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a-kube-api-access-zhsxf\") pod \"metallb-operator-webhook-server-7494cc9b6f-4zr8g\" (UID: \"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a\") " pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.255764 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.451287 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g"] Dec 03 20:49:53 crc kubenswrapper[4765]: W1203 20:49:53.459540 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79c4ff63_6cbc_4fd7_9c7f_70bd9004d94a.slice/crio-4f058726383b537af352283bfbc982d951820ab2066fc1fc9cc46881cd8cc98f WatchSource:0}: Error finding container 4f058726383b537af352283bfbc982d951820ab2066fc1fc9cc46881cd8cc98f: Status 404 returned error can't find the container with id 4f058726383b537af352283bfbc982d951820ab2066fc1fc9cc46881cd8cc98f Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.519177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" event={"ID":"56245235-eef6-472d-b481-1b9d7f80b89c","Type":"ContainerStarted","Data":"0c1746aa1f89118b53a023826241dae1b2d31f74868fe81f58e2a791edcc2a5e"} Dec 03 20:49:53 crc kubenswrapper[4765]: I1203 20:49:53.520110 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" event={"ID":"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a","Type":"ContainerStarted","Data":"4f058726383b537af352283bfbc982d951820ab2066fc1fc9cc46881cd8cc98f"} Dec 03 20:49:57 crc kubenswrapper[4765]: I1203 20:49:57.544149 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" event={"ID":"56245235-eef6-472d-b481-1b9d7f80b89c","Type":"ContainerStarted","Data":"61bb81993038e3919f5e7c8450fc404ec532dc84a3f51032bc2353961b040fdc"} Dec 03 20:49:57 crc kubenswrapper[4765]: I1203 20:49:57.545207 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:49:57 crc kubenswrapper[4765]: I1203 20:49:57.546977 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" event={"ID":"79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a","Type":"ContainerStarted","Data":"b19243bde2a692a8f17b89e678aeb7e76fe3aafc6a16ae81396072c45180e540"} Dec 03 20:49:57 crc kubenswrapper[4765]: I1203 20:49:57.548069 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:49:57 crc kubenswrapper[4765]: I1203 20:49:57.638912 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" podStartSLOduration=1.750525921 podStartE2EDuration="5.638893074s" podCreationTimestamp="2025-12-03 20:49:52 +0000 UTC" firstStartedPulling="2025-12-03 20:49:53.212260952 +0000 UTC m=+691.142806093" lastFinishedPulling="2025-12-03 20:49:57.100628095 +0000 UTC m=+695.031173246" observedRunningTime="2025-12-03 20:49:57.592632071 +0000 UTC m=+695.523177242" watchObservedRunningTime="2025-12-03 20:49:57.638893074 +0000 UTC m=+695.569438235" Dec 03 20:49:57 crc kubenswrapper[4765]: I1203 20:49:57.642826 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" podStartSLOduration=1.9844294439999999 podStartE2EDuration="5.64281198s" podCreationTimestamp="2025-12-03 20:49:52 +0000 UTC" firstStartedPulling="2025-12-03 20:49:53.461675034 +0000 UTC m=+691.392220185" lastFinishedPulling="2025-12-03 20:49:57.12005755 +0000 UTC m=+695.050602721" observedRunningTime="2025-12-03 20:49:57.636969182 +0000 UTC m=+695.567514353" watchObservedRunningTime="2025-12-03 20:49:57.64281198 +0000 UTC m=+695.573357141" Dec 03 20:50:13 crc kubenswrapper[4765]: I1203 20:50:13.261597 4765 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7494cc9b6f-4zr8g" Dec 03 20:50:32 crc kubenswrapper[4765]: I1203 20:50:32.982664 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.697912 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rtzp2"] Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.700453 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.703816 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz"] Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.704822 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.705337 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.705572 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6vqbf" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.705689 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.708051 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.717964 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz"] Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.773215 4765 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qvsp4"] Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.774064 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: W1203 20:50:33.776928 4765 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: secrets "metallb-memberlist" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Dec 03 20:50:33 crc kubenswrapper[4765]: E1203 20:50:33.776983 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-memberlist\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 20:50:33 crc kubenswrapper[4765]: W1203 20:50:33.777035 4765 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-8cdtq": failed to list *v1.Secret: secrets "speaker-dockercfg-8cdtq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Dec 03 20:50:33 crc kubenswrapper[4765]: E1203 20:50:33.777050 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-8cdtq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-dockercfg-8cdtq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 20:50:33 crc kubenswrapper[4765]: W1203 20:50:33.777325 4765 reflector.go:561] 
object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: secrets "speaker-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Dec 03 20:50:33 crc kubenswrapper[4765]: E1203 20:50:33.777369 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 20:50:33 crc kubenswrapper[4765]: W1203 20:50:33.777431 4765 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: configmaps "metallb-excludel2" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Dec 03 20:50:33 crc kubenswrapper[4765]: E1203 20:50:33.777446 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"metallb-excludel2\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.791159 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-n78jt"] Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.792199 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.794016 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.801934 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-n78jt"] Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.812532 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvjk4\" (UniqueName: \"kubernetes.io/projected/fed0fc97-3d14-4716-ad43-4c3bfd606850-kube-api-access-dvjk4\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.812807 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-memberlist\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.812892 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-startup\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.812921 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:33 crc 
kubenswrapper[4765]: I1203 20:50:33.812952 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fed0fc97-3d14-4716-ad43-4c3bfd606850-metallb-excludel2\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.812971 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb485\" (UniqueName: \"kubernetes.io/projected/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-kube-api-access-wb485\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813007 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-sockets\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813059 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-reloader\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813089 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-conf\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813106 
4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srh7w\" (UniqueName: \"kubernetes.io/projected/d3648e48-1afd-42ec-9aec-4d91958639b9-kube-api-access-srh7w\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813123 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-metrics-certs\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3648e48-1afd-42ec-9aec-4d91958639b9-metrics-certs\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.813164 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-metrics\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914267 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-metrics-certs\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914325 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fed0fc97-3d14-4716-ad43-4c3bfd606850-metallb-excludel2\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914366 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb485\" (UniqueName: \"kubernetes.io/projected/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-kube-api-access-wb485\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914392 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-sockets\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914422 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46ww\" (UniqueName: \"kubernetes.io/projected/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-kube-api-access-z46ww\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914448 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-reloader\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914467 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-conf\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914484 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srh7w\" (UniqueName: \"kubernetes.io/projected/d3648e48-1afd-42ec-9aec-4d91958639b9-kube-api-access-srh7w\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914502 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-metrics-certs\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914518 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3648e48-1afd-42ec-9aec-4d91958639b9-metrics-certs\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914533 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-metrics\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914557 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvjk4\" (UniqueName: \"kubernetes.io/projected/fed0fc97-3d14-4716-ad43-4c3bfd606850-kube-api-access-dvjk4\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " 
pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914581 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-memberlist\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914604 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-cert\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914628 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-startup\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914642 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:33 crc kubenswrapper[4765]: E1203 20:50:33.914749 4765 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 03 20:50:33 crc kubenswrapper[4765]: E1203 20:50:33.914798 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-cert podName:e875bdde-0dbd-40b6-a84c-1bdd7e4baabf nodeName:}" failed. 
No retries permitted until 2025-12-03 20:50:34.414783276 +0000 UTC m=+732.345328427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-cert") pod "frr-k8s-webhook-server-7fcb986d4-l7nmz" (UID: "e875bdde-0dbd-40b6-a84c-1bdd7e4baabf") : secret "frr-k8s-webhook-server-cert" not found Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914876 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-reloader\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914934 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-sockets\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.914967 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-metrics\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.915264 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-conf\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.915926 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/d3648e48-1afd-42ec-9aec-4d91958639b9-frr-startup\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.926241 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3648e48-1afd-42ec-9aec-4d91958639b9-metrics-certs\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.942871 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvjk4\" (UniqueName: \"kubernetes.io/projected/fed0fc97-3d14-4716-ad43-4c3bfd606850-kube-api-access-dvjk4\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.943080 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srh7w\" (UniqueName: \"kubernetes.io/projected/d3648e48-1afd-42ec-9aec-4d91958639b9-kube-api-access-srh7w\") pod \"frr-k8s-rtzp2\" (UID: \"d3648e48-1afd-42ec-9aec-4d91958639b9\") " pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:33 crc kubenswrapper[4765]: I1203 20:50:33.946115 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb485\" (UniqueName: \"kubernetes.io/projected/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-kube-api-access-wb485\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.015668 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-metrics-certs\") pod \"controller-f8648f98b-n78jt\" (UID: 
\"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.015735 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46ww\" (UniqueName: \"kubernetes.io/projected/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-kube-api-access-z46ww\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.015834 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-cert\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.017681 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.018635 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-metrics-certs\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.023723 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.031488 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-cert\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.032183 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46ww\" (UniqueName: \"kubernetes.io/projected/b248c7e1-c2a2-4c22-ab0f-fb221be60e58-kube-api-access-z46ww\") pod \"controller-f8648f98b-n78jt\" (UID: \"b248c7e1-c2a2-4c22-ab0f-fb221be60e58\") " pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.107560 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.294559 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-n78jt"] Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.421080 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.426993 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e875bdde-0dbd-40b6-a84c-1bdd7e4baabf-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l7nmz\" (UID: \"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 
20:50:34.644513 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.669835 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.678436 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-metrics-certs\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.787057 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-n78jt" event={"ID":"b248c7e1-c2a2-4c22-ab0f-fb221be60e58","Type":"ContainerStarted","Data":"3117b43e1ad26400d62210ada312891420f3fccbb0520481b639c7197ad6df94"} Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.788427 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"1b4a651a2985210a9010580072971368f4d3d0033d52247a03d6baa81db56b74"} Dec 03 20:50:34 crc kubenswrapper[4765]: E1203 20:50:34.920268 4765 configmap.go:193] Couldn't get configMap metallb-system/metallb-excludel2: failed to sync configmap cache: timed out waiting for the condition Dec 03 20:50:34 crc kubenswrapper[4765]: E1203 20:50:34.920724 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fed0fc97-3d14-4716-ad43-4c3bfd606850-metallb-excludel2 podName:fed0fc97-3d14-4716-ad43-4c3bfd606850 nodeName:}" failed. No retries permitted until 2025-12-03 20:50:35.420700813 +0000 UTC m=+733.351245974 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metallb-excludel2" (UniqueName: "kubernetes.io/configmap/fed0fc97-3d14-4716-ad43-4c3bfd606850-metallb-excludel2") pod "speaker-qvsp4" (UID: "fed0fc97-3d14-4716-ad43-4c3bfd606850") : failed to sync configmap cache: timed out waiting for the condition Dec 03 20:50:34 crc kubenswrapper[4765]: E1203 20:50:34.920398 4765 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: failed to sync secret cache: timed out waiting for the condition Dec 03 20:50:34 crc kubenswrapper[4765]: E1203 20:50:34.921104 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-memberlist podName:fed0fc97-3d14-4716-ad43-4c3bfd606850 nodeName:}" failed. No retries permitted until 2025-12-03 20:50:35.421092003 +0000 UTC m=+733.351637164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-memberlist") pod "speaker-qvsp4" (UID: "fed0fc97-3d14-4716-ad43-4c3bfd606850") : failed to sync secret cache: timed out waiting for the condition Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.934527 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz"] Dec 03 20:50:34 crc kubenswrapper[4765]: W1203 20:50:34.943515 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode875bdde_0dbd_40b6_a84c_1bdd7e4baabf.slice/crio-eae9e6b0a0cf2b92dc823e81e51b785f09c79b8ec7a760a5c9475a794b3fe275 WatchSource:0}: Error finding container eae9e6b0a0cf2b92dc823e81e51b785f09c79b8ec7a760a5c9475a794b3fe275: Status 404 returned error can't find the container with id eae9e6b0a0cf2b92dc823e81e51b785f09c79b8ec7a760a5c9475a794b3fe275 Dec 03 20:50:34 crc kubenswrapper[4765]: I1203 20:50:34.954175 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"speaker-dockercfg-8cdtq" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.000035 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.153664 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.440155 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-memberlist\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.440323 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fed0fc97-3d14-4716-ad43-4c3bfd606850-metallb-excludel2\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.441981 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fed0fc97-3d14-4716-ad43-4c3bfd606850-metallb-excludel2\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.445512 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fed0fc97-3d14-4716-ad43-4c3bfd606850-memberlist\") pod \"speaker-qvsp4\" (UID: \"fed0fc97-3d14-4716-ad43-4c3bfd606850\") " pod="metallb-system/speaker-qvsp4" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.588369 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qvsp4" Dec 03 20:50:35 crc kubenswrapper[4765]: W1203 20:50:35.609971 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfed0fc97_3d14_4716_ad43_4c3bfd606850.slice/crio-5a262d8af9f8bbdf8becfa5c12584b841064fc62fbc7b1cf4820ce01f21b6c24 WatchSource:0}: Error finding container 5a262d8af9f8bbdf8becfa5c12584b841064fc62fbc7b1cf4820ce01f21b6c24: Status 404 returned error can't find the container with id 5a262d8af9f8bbdf8becfa5c12584b841064fc62fbc7b1cf4820ce01f21b6c24 Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.802438 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-n78jt" event={"ID":"b248c7e1-c2a2-4c22-ab0f-fb221be60e58","Type":"ContainerStarted","Data":"d7b9058c7c28c3d785ffde643ab073d57d731f92a53c9e6c41dcf532a9bf6ff3"} Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.802519 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-n78jt" event={"ID":"b248c7e1-c2a2-4c22-ab0f-fb221be60e58","Type":"ContainerStarted","Data":"3ca149cf4ebacc77e636a03632aacaf0babda5c061e567a72f33c138c606f285"} Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.802608 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.806496 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" event={"ID":"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf","Type":"ContainerStarted","Data":"eae9e6b0a0cf2b92dc823e81e51b785f09c79b8ec7a760a5c9475a794b3fe275"} Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.808847 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qvsp4" 
event={"ID":"fed0fc97-3d14-4716-ad43-4c3bfd606850","Type":"ContainerStarted","Data":"5a262d8af9f8bbdf8becfa5c12584b841064fc62fbc7b1cf4820ce01f21b6c24"} Dec 03 20:50:35 crc kubenswrapper[4765]: I1203 20:50:35.830401 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-n78jt" podStartSLOduration=2.830382204 podStartE2EDuration="2.830382204s" podCreationTimestamp="2025-12-03 20:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:50:35.829195472 +0000 UTC m=+733.759740643" watchObservedRunningTime="2025-12-03 20:50:35.830382204 +0000 UTC m=+733.760927365" Dec 03 20:50:36 crc kubenswrapper[4765]: I1203 20:50:36.825521 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qvsp4" event={"ID":"fed0fc97-3d14-4716-ad43-4c3bfd606850","Type":"ContainerStarted","Data":"1a7136291a262e9c13c14b1774890a260e241d9b5631439aa11233b2b2de838a"} Dec 03 20:50:37 crc kubenswrapper[4765]: I1203 20:50:37.833696 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qvsp4" event={"ID":"fed0fc97-3d14-4716-ad43-4c3bfd606850","Type":"ContainerStarted","Data":"20b09f677c62727043093acc6e1934b832556a9cb4372392148e68c451dfe7cb"} Dec 03 20:50:37 crc kubenswrapper[4765]: I1203 20:50:37.833798 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qvsp4" Dec 03 20:50:37 crc kubenswrapper[4765]: I1203 20:50:37.856024 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qvsp4" podStartSLOduration=4.856006419 podStartE2EDuration="4.856006419s" podCreationTimestamp="2025-12-03 20:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:50:37.852553046 +0000 UTC m=+735.783098197" 
watchObservedRunningTime="2025-12-03 20:50:37.856006419 +0000 UTC m=+735.786551570" Dec 03 20:50:41 crc kubenswrapper[4765]: I1203 20:50:41.867339 4765 generic.go:334] "Generic (PLEG): container finished" podID="d3648e48-1afd-42ec-9aec-4d91958639b9" containerID="b53eb7163458680f4d530cb8f85c079e74de9921e6d95ae4b5a87e62e12baeba" exitCode=0 Dec 03 20:50:41 crc kubenswrapper[4765]: I1203 20:50:41.867456 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerDied","Data":"b53eb7163458680f4d530cb8f85c079e74de9921e6d95ae4b5a87e62e12baeba"} Dec 03 20:50:41 crc kubenswrapper[4765]: I1203 20:50:41.870587 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" event={"ID":"e875bdde-0dbd-40b6-a84c-1bdd7e4baabf","Type":"ContainerStarted","Data":"f586453947fa56b66a2546d38218e83601b916e2a567bd9dc6e4a2eb4fd388dd"} Dec 03 20:50:41 crc kubenswrapper[4765]: I1203 20:50:41.870811 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:41 crc kubenswrapper[4765]: I1203 20:50:41.922205 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" podStartSLOduration=2.623029691 podStartE2EDuration="8.922172007s" podCreationTimestamp="2025-12-03 20:50:33 +0000 UTC" firstStartedPulling="2025-12-03 20:50:34.949660719 +0000 UTC m=+732.880205890" lastFinishedPulling="2025-12-03 20:50:41.248803045 +0000 UTC m=+739.179348206" observedRunningTime="2025-12-03 20:50:41.921457108 +0000 UTC m=+739.852002289" watchObservedRunningTime="2025-12-03 20:50:41.922172007 +0000 UTC m=+739.852717198" Dec 03 20:50:42 crc kubenswrapper[4765]: I1203 20:50:42.879008 4765 generic.go:334] "Generic (PLEG): container finished" podID="d3648e48-1afd-42ec-9aec-4d91958639b9" 
containerID="fa284f13802a7a71547bc8c950eb19b2c407c56fa7a18fae96abe78ac5f191dd" exitCode=0 Dec 03 20:50:42 crc kubenswrapper[4765]: I1203 20:50:42.879091 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerDied","Data":"fa284f13802a7a71547bc8c950eb19b2c407c56fa7a18fae96abe78ac5f191dd"} Dec 03 20:50:43 crc kubenswrapper[4765]: I1203 20:50:43.901166 4765 generic.go:334] "Generic (PLEG): container finished" podID="d3648e48-1afd-42ec-9aec-4d91958639b9" containerID="35ea673289d4800cf7e42427a5e096aba290025edde0bbb7376367e6c2a8170d" exitCode=0 Dec 03 20:50:43 crc kubenswrapper[4765]: I1203 20:50:43.901283 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerDied","Data":"35ea673289d4800cf7e42427a5e096aba290025edde0bbb7376367e6c2a8170d"} Dec 03 20:50:44 crc kubenswrapper[4765]: I1203 20:50:44.110983 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-n78jt" Dec 03 20:50:44 crc kubenswrapper[4765]: I1203 20:50:44.912653 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"ac4590dec3b3d7afbaa577af2f3bcbe481795d2ee50b0705ed24d112da802c6a"} Dec 03 20:50:44 crc kubenswrapper[4765]: I1203 20:50:44.913006 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"d378e0b7b61d41ffca741120dd8e51a0cad59a8ab06785c5716c239bd8ebc325"} Dec 03 20:50:44 crc kubenswrapper[4765]: I1203 20:50:44.913022 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" 
event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"64e23cbebab7abf2807a6a76a389ab2006c5a0baedefa9d830e45981c0b2879e"} Dec 03 20:50:44 crc kubenswrapper[4765]: I1203 20:50:44.913035 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"d9881396f73bd5e901de5a85709a989e2b17651fbb2c0b82d529774617b74c1f"} Dec 03 20:50:44 crc kubenswrapper[4765]: I1203 20:50:44.913046 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"fab7e97f49149428bef2ad8e0399156353b6e376d4fa7a94293084983f436eb8"} Dec 03 20:50:45 crc kubenswrapper[4765]: I1203 20:50:45.930290 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rtzp2" event={"ID":"d3648e48-1afd-42ec-9aec-4d91958639b9","Type":"ContainerStarted","Data":"a63e5644ddda9192570fe9089e746a488d971e5ee395edc53d52d62dd0a0d6b3"} Dec 03 20:50:45 crc kubenswrapper[4765]: I1203 20:50:45.931744 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:45 crc kubenswrapper[4765]: I1203 20:50:45.965196 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rtzp2" podStartSLOduration=5.879450302 podStartE2EDuration="12.965167785s" podCreationTimestamp="2025-12-03 20:50:33 +0000 UTC" firstStartedPulling="2025-12-03 20:50:34.159073163 +0000 UTC m=+732.089618314" lastFinishedPulling="2025-12-03 20:50:41.244790646 +0000 UTC m=+739.175335797" observedRunningTime="2025-12-03 20:50:45.9627979 +0000 UTC m=+743.893343091" watchObservedRunningTime="2025-12-03 20:50:45.965167785 +0000 UTC m=+743.895712976" Dec 03 20:50:49 crc kubenswrapper[4765]: I1203 20:50:49.025137 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:49 crc kubenswrapper[4765]: I1203 20:50:49.091072 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:54 crc kubenswrapper[4765]: I1203 20:50:54.028726 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rtzp2" Dec 03 20:50:54 crc kubenswrapper[4765]: I1203 20:50:54.652362 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l7nmz" Dec 03 20:50:55 crc kubenswrapper[4765]: I1203 20:50:55.591806 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qvsp4" Dec 03 20:50:55 crc kubenswrapper[4765]: I1203 20:50:55.806880 4765 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.514796 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sd579"] Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.518059 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.526882 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sd579"] Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.529565 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.532483 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.532677 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5vjhk" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.601231 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5k6\" (UniqueName: \"kubernetes.io/projected/b51e701e-896a-4682-aa6c-abad5f6de76a-kube-api-access-4k5k6\") pod \"openstack-operator-index-sd579\" (UID: \"b51e701e-896a-4682-aa6c-abad5f6de76a\") " pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.702397 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5k6\" (UniqueName: \"kubernetes.io/projected/b51e701e-896a-4682-aa6c-abad5f6de76a-kube-api-access-4k5k6\") pod \"openstack-operator-index-sd579\" (UID: \"b51e701e-896a-4682-aa6c-abad5f6de76a\") " pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.720508 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5k6\" (UniqueName: \"kubernetes.io/projected/b51e701e-896a-4682-aa6c-abad5f6de76a-kube-api-access-4k5k6\") pod \"openstack-operator-index-sd579\" (UID: 
\"b51e701e-896a-4682-aa6c-abad5f6de76a\") " pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:50:58 crc kubenswrapper[4765]: I1203 20:50:58.848542 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:50:59 crc kubenswrapper[4765]: I1203 20:50:59.310773 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sd579"] Dec 03 20:51:00 crc kubenswrapper[4765]: I1203 20:51:00.031529 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sd579" event={"ID":"b51e701e-896a-4682-aa6c-abad5f6de76a","Type":"ContainerStarted","Data":"d9d2309aa8b013e5c965a39e8903435204eefa578769fb2d3551011299210c49"} Dec 03 20:51:01 crc kubenswrapper[4765]: I1203 20:51:01.875464 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sd579"] Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.048215 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sd579" event={"ID":"b51e701e-896a-4682-aa6c-abad5f6de76a","Type":"ContainerStarted","Data":"4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686"} Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.068000 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sd579" podStartSLOduration=2.322969074 podStartE2EDuration="4.067969976s" podCreationTimestamp="2025-12-03 20:50:58 +0000 UTC" firstStartedPulling="2025-12-03 20:50:59.32339298 +0000 UTC m=+757.253938171" lastFinishedPulling="2025-12-03 20:51:01.068393882 +0000 UTC m=+758.998939073" observedRunningTime="2025-12-03 20:51:02.06700629 +0000 UTC m=+759.997551481" watchObservedRunningTime="2025-12-03 20:51:02.067969976 +0000 UTC m=+759.998515127" Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.495473 
4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qn2lg"] Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.497853 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.503801 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qn2lg"] Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.557358 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86h7l\" (UniqueName: \"kubernetes.io/projected/86270547-80b6-44d5-971f-c260b5b7a106-kube-api-access-86h7l\") pod \"openstack-operator-index-qn2lg\" (UID: \"86270547-80b6-44d5-971f-c260b5b7a106\") " pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.658582 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86h7l\" (UniqueName: \"kubernetes.io/projected/86270547-80b6-44d5-971f-c260b5b7a106-kube-api-access-86h7l\") pod \"openstack-operator-index-qn2lg\" (UID: \"86270547-80b6-44d5-971f-c260b5b7a106\") " pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.694606 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86h7l\" (UniqueName: \"kubernetes.io/projected/86270547-80b6-44d5-971f-c260b5b7a106-kube-api-access-86h7l\") pod \"openstack-operator-index-qn2lg\" (UID: \"86270547-80b6-44d5-971f-c260b5b7a106\") " pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:02 crc kubenswrapper[4765]: I1203 20:51:02.838271 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:03 crc kubenswrapper[4765]: I1203 20:51:03.054202 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sd579" podUID="b51e701e-896a-4682-aa6c-abad5f6de76a" containerName="registry-server" containerID="cri-o://4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686" gracePeriod=2 Dec 03 20:51:03 crc kubenswrapper[4765]: I1203 20:51:03.342188 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qn2lg"] Dec 03 20:51:03 crc kubenswrapper[4765]: W1203 20:51:03.349822 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86270547_80b6_44d5_971f_c260b5b7a106.slice/crio-393f8399d18ebd43ab5112b81b5559435ed47d352d0e24fd235d850fac351ab2 WatchSource:0}: Error finding container 393f8399d18ebd43ab5112b81b5559435ed47d352d0e24fd235d850fac351ab2: Status 404 returned error can't find the container with id 393f8399d18ebd43ab5112b81b5559435ed47d352d0e24fd235d850fac351ab2 Dec 03 20:51:03 crc kubenswrapper[4765]: I1203 20:51:03.385145 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:51:03 crc kubenswrapper[4765]: I1203 20:51:03.474213 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k5k6\" (UniqueName: \"kubernetes.io/projected/b51e701e-896a-4682-aa6c-abad5f6de76a-kube-api-access-4k5k6\") pod \"b51e701e-896a-4682-aa6c-abad5f6de76a\" (UID: \"b51e701e-896a-4682-aa6c-abad5f6de76a\") " Dec 03 20:51:03 crc kubenswrapper[4765]: I1203 20:51:03.480499 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51e701e-896a-4682-aa6c-abad5f6de76a-kube-api-access-4k5k6" (OuterVolumeSpecName: "kube-api-access-4k5k6") pod "b51e701e-896a-4682-aa6c-abad5f6de76a" (UID: "b51e701e-896a-4682-aa6c-abad5f6de76a"). InnerVolumeSpecName "kube-api-access-4k5k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:51:03 crc kubenswrapper[4765]: I1203 20:51:03.577798 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k5k6\" (UniqueName: \"kubernetes.io/projected/b51e701e-896a-4682-aa6c-abad5f6de76a-kube-api-access-4k5k6\") on node \"crc\" DevicePath \"\"" Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.064076 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qn2lg" event={"ID":"86270547-80b6-44d5-971f-c260b5b7a106","Type":"ContainerStarted","Data":"7c574b97b373fbcfd2d2d9c16c7fb9e65dbdf7deed7a5d83e7ecfb554a11897d"} Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.064127 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qn2lg" event={"ID":"86270547-80b6-44d5-971f-c260b5b7a106","Type":"ContainerStarted","Data":"393f8399d18ebd43ab5112b81b5559435ed47d352d0e24fd235d850fac351ab2"} Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.068174 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="b51e701e-896a-4682-aa6c-abad5f6de76a" containerID="4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686" exitCode=0 Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.068209 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sd579" event={"ID":"b51e701e-896a-4682-aa6c-abad5f6de76a","Type":"ContainerDied","Data":"4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686"} Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.068225 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sd579" event={"ID":"b51e701e-896a-4682-aa6c-abad5f6de76a","Type":"ContainerDied","Data":"d9d2309aa8b013e5c965a39e8903435204eefa578769fb2d3551011299210c49"} Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.068241 4765 scope.go:117] "RemoveContainer" containerID="4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686" Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.068257 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sd579" Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.090278 4765 scope.go:117] "RemoveContainer" containerID="4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686" Dec 03 20:51:04 crc kubenswrapper[4765]: E1203 20:51:04.093523 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686\": container with ID starting with 4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686 not found: ID does not exist" containerID="4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686" Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.093634 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686"} err="failed to get container status \"4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686\": rpc error: code = NotFound desc = could not find container \"4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686\": container with ID starting with 4a53f4ef9c204caa293df29547d844d616e6f3052c3235902b8c06c92e3bf686 not found: ID does not exist" Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.096586 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qn2lg" podStartSLOduration=2.043300187 podStartE2EDuration="2.096557513s" podCreationTimestamp="2025-12-03 20:51:02 +0000 UTC" firstStartedPulling="2025-12-03 20:51:03.353345914 +0000 UTC m=+761.283891075" lastFinishedPulling="2025-12-03 20:51:03.40660325 +0000 UTC m=+761.337148401" observedRunningTime="2025-12-03 20:51:04.088832443 +0000 UTC m=+762.019377594" watchObservedRunningTime="2025-12-03 20:51:04.096557513 +0000 UTC m=+762.027102694" Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 
20:51:04.121715 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sd579"] Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.126116 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sd579"] Dec 03 20:51:04 crc kubenswrapper[4765]: I1203 20:51:04.372591 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51e701e-896a-4682-aa6c-abad5f6de76a" path="/var/lib/kubelet/pods/b51e701e-896a-4682-aa6c-abad5f6de76a/volumes" Dec 03 20:51:12 crc kubenswrapper[4765]: I1203 20:51:12.838760 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:12 crc kubenswrapper[4765]: I1203 20:51:12.839126 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:12 crc kubenswrapper[4765]: I1203 20:51:12.873508 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:13 crc kubenswrapper[4765]: I1203 20:51:13.170400 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qn2lg" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.540446 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2"] Dec 03 20:51:19 crc kubenswrapper[4765]: E1203 20:51:19.540980 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51e701e-896a-4682-aa6c-abad5f6de76a" containerName="registry-server" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.540992 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51e701e-896a-4682-aa6c-abad5f6de76a" containerName="registry-server" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.541107 4765 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b51e701e-896a-4682-aa6c-abad5f6de76a" containerName="registry-server" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.541877 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.543586 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zctd9" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.555737 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2"] Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.615892 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-bundle\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.616005 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4nkr\" (UniqueName: \"kubernetes.io/projected/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-kube-api-access-h4nkr\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.616030 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-util\") pod 
\"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.717481 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nkr\" (UniqueName: \"kubernetes.io/projected/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-kube-api-access-h4nkr\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.717529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-util\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.717572 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-bundle\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.718049 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-bundle\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " 
pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.718355 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-util\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.738606 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4nkr\" (UniqueName: \"kubernetes.io/projected/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-kube-api-access-h4nkr\") pod \"dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:19 crc kubenswrapper[4765]: I1203 20:51:19.871703 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:20 crc kubenswrapper[4765]: I1203 20:51:20.323277 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2"] Dec 03 20:51:21 crc kubenswrapper[4765]: I1203 20:51:21.212449 4765 generic.go:334] "Generic (PLEG): container finished" podID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerID="b74acdb0113ec733564810e2f113ceb6676849f6bdc7a76f8a886ef1d5e07094" exitCode=0 Dec 03 20:51:21 crc kubenswrapper[4765]: I1203 20:51:21.212506 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" event={"ID":"b2b9a2d2-8e49-45b0-b855-62ce65981a6c","Type":"ContainerDied","Data":"b74acdb0113ec733564810e2f113ceb6676849f6bdc7a76f8a886ef1d5e07094"} Dec 03 20:51:21 crc kubenswrapper[4765]: I1203 20:51:21.212810 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" event={"ID":"b2b9a2d2-8e49-45b0-b855-62ce65981a6c","Type":"ContainerStarted","Data":"84e90ff405c3d70fb948d332e48b8a12efc50c2d0f846197fbc591ea460bcf76"} Dec 03 20:51:23 crc kubenswrapper[4765]: I1203 20:51:23.229350 4765 generic.go:334] "Generic (PLEG): container finished" podID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerID="e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28" exitCode=0 Dec 03 20:51:23 crc kubenswrapper[4765]: I1203 20:51:23.229454 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" event={"ID":"b2b9a2d2-8e49-45b0-b855-62ce65981a6c","Type":"ContainerDied","Data":"e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28"} Dec 03 20:51:24 crc kubenswrapper[4765]: I1203 20:51:24.242073 4765 generic.go:334] 
"Generic (PLEG): container finished" podID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerID="93d97eab95b7825c7a2e60e0f2d425c273c1afd61bf76dc9e483bfda9667c962" exitCode=0 Dec 03 20:51:24 crc kubenswrapper[4765]: I1203 20:51:24.242177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" event={"ID":"b2b9a2d2-8e49-45b0-b855-62ce65981a6c","Type":"ContainerDied","Data":"93d97eab95b7825c7a2e60e0f2d425c273c1afd61bf76dc9e483bfda9667c962"} Dec 03 20:51:24 crc kubenswrapper[4765]: I1203 20:51:24.798840 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:51:24 crc kubenswrapper[4765]: I1203 20:51:24.798929 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.535812 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.705894 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-util\") pod \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.706127 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4nkr\" (UniqueName: \"kubernetes.io/projected/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-kube-api-access-h4nkr\") pod \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.706270 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-bundle\") pod \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\" (UID: \"b2b9a2d2-8e49-45b0-b855-62ce65981a6c\") " Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.707527 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-bundle" (OuterVolumeSpecName: "bundle") pod "b2b9a2d2-8e49-45b0-b855-62ce65981a6c" (UID: "b2b9a2d2-8e49-45b0-b855-62ce65981a6c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.722697 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-util" (OuterVolumeSpecName: "util") pod "b2b9a2d2-8e49-45b0-b855-62ce65981a6c" (UID: "b2b9a2d2-8e49-45b0-b855-62ce65981a6c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.736470 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-kube-api-access-h4nkr" (OuterVolumeSpecName: "kube-api-access-h4nkr") pod "b2b9a2d2-8e49-45b0-b855-62ce65981a6c" (UID: "b2b9a2d2-8e49-45b0-b855-62ce65981a6c"). InnerVolumeSpecName "kube-api-access-h4nkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.807977 4765 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-util\") on node \"crc\" DevicePath \"\"" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.808018 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4nkr\" (UniqueName: \"kubernetes.io/projected/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-kube-api-access-h4nkr\") on node \"crc\" DevicePath \"\"" Dec 03 20:51:25 crc kubenswrapper[4765]: I1203 20:51:25.808029 4765 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2b9a2d2-8e49-45b0-b855-62ce65981a6c-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:51:26 crc kubenswrapper[4765]: I1203 20:51:26.262090 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" event={"ID":"b2b9a2d2-8e49-45b0-b855-62ce65981a6c","Type":"ContainerDied","Data":"84e90ff405c3d70fb948d332e48b8a12efc50c2d0f846197fbc591ea460bcf76"} Dec 03 20:51:26 crc kubenswrapper[4765]: I1203 20:51:26.262132 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e90ff405c3d70fb948d332e48b8a12efc50c2d0f846197fbc591ea460bcf76" Dec 03 20:51:26 crc kubenswrapper[4765]: I1203 20:51:26.262192 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2" Dec 03 20:51:32 crc kubenswrapper[4765]: E1203 20:51:32.562824 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b9a2d2_8e49_45b0_b855_62ce65981a6c.slice/crio-conmon-e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28.scope\": RecentStats: unable to find data in memory cache]" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.863866 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq"] Dec 03 20:51:32 crc kubenswrapper[4765]: E1203 20:51:32.871013 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="extract" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.871049 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="extract" Dec 03 20:51:32 crc kubenswrapper[4765]: E1203 20:51:32.871077 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="util" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.871086 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="util" Dec 03 20:51:32 crc kubenswrapper[4765]: E1203 20:51:32.871099 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="pull" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.871107 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="pull" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.871292 4765 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b2b9a2d2-8e49-45b0-b855-62ce65981a6c" containerName="extract" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.871822 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.873347 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-69dqm" Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.881378 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq"] Dec 03 20:51:32 crc kubenswrapper[4765]: I1203 20:51:32.902925 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sxk\" (UniqueName: \"kubernetes.io/projected/59d4b087-73be-498b-b8f7-d6b067002ad5-kube-api-access-k5sxk\") pod \"openstack-operator-controller-operator-54ccb7f4-f26lq\" (UID: \"59d4b087-73be-498b-b8f7-d6b067002ad5\") " pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 20:51:33 crc kubenswrapper[4765]: I1203 20:51:33.004629 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sxk\" (UniqueName: \"kubernetes.io/projected/59d4b087-73be-498b-b8f7-d6b067002ad5-kube-api-access-k5sxk\") pod \"openstack-operator-controller-operator-54ccb7f4-f26lq\" (UID: \"59d4b087-73be-498b-b8f7-d6b067002ad5\") " pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 20:51:33 crc kubenswrapper[4765]: I1203 20:51:33.028267 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sxk\" (UniqueName: \"kubernetes.io/projected/59d4b087-73be-498b-b8f7-d6b067002ad5-kube-api-access-k5sxk\") pod \"openstack-operator-controller-operator-54ccb7f4-f26lq\" (UID: 
\"59d4b087-73be-498b-b8f7-d6b067002ad5\") " pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 20:51:33 crc kubenswrapper[4765]: I1203 20:51:33.199536 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 20:51:33 crc kubenswrapper[4765]: I1203 20:51:33.644018 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq"] Dec 03 20:51:33 crc kubenswrapper[4765]: W1203 20:51:33.653950 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59d4b087_73be_498b_b8f7_d6b067002ad5.slice/crio-94d6d8c7f6ef9a5e8f42b33c6e05608aa5c70f4d766a036ed0eef9d6be6b4003 WatchSource:0}: Error finding container 94d6d8c7f6ef9a5e8f42b33c6e05608aa5c70f4d766a036ed0eef9d6be6b4003: Status 404 returned error can't find the container with id 94d6d8c7f6ef9a5e8f42b33c6e05608aa5c70f4d766a036ed0eef9d6be6b4003 Dec 03 20:51:34 crc kubenswrapper[4765]: I1203 20:51:34.332937 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" event={"ID":"59d4b087-73be-498b-b8f7-d6b067002ad5","Type":"ContainerStarted","Data":"94d6d8c7f6ef9a5e8f42b33c6e05608aa5c70f4d766a036ed0eef9d6be6b4003"} Dec 03 20:51:38 crc kubenswrapper[4765]: I1203 20:51:38.358207 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" event={"ID":"59d4b087-73be-498b-b8f7-d6b067002ad5","Type":"ContainerStarted","Data":"32ecbb773e9decc146b9c8997a43a5b2ecb9060d7dcc2fd773739bad2b67cb9d"} Dec 03 20:51:38 crc kubenswrapper[4765]: I1203 20:51:38.358558 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 
20:51:38 crc kubenswrapper[4765]: I1203 20:51:38.390393 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" podStartSLOduration=2.636354403 podStartE2EDuration="6.390365322s" podCreationTimestamp="2025-12-03 20:51:32 +0000 UTC" firstStartedPulling="2025-12-03 20:51:33.656052766 +0000 UTC m=+791.586597917" lastFinishedPulling="2025-12-03 20:51:37.410063665 +0000 UTC m=+795.340608836" observedRunningTime="2025-12-03 20:51:38.387573557 +0000 UTC m=+796.318118708" watchObservedRunningTime="2025-12-03 20:51:38.390365322 +0000 UTC m=+796.320910513" Dec 03 20:51:42 crc kubenswrapper[4765]: E1203 20:51:42.702733 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b9a2d2_8e49_45b0_b855_62ce65981a6c.slice/crio-conmon-e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28.scope\": RecentStats: unable to find data in memory cache]" Dec 03 20:51:43 crc kubenswrapper[4765]: I1203 20:51:43.202179 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-54ccb7f4-f26lq" Dec 03 20:51:52 crc kubenswrapper[4765]: E1203 20:51:52.857171 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b9a2d2_8e49_45b0_b855_62ce65981a6c.slice/crio-conmon-e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28.scope\": RecentStats: unable to find data in memory cache]" Dec 03 20:51:54 crc kubenswrapper[4765]: I1203 20:51:54.799665 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:51:54 crc kubenswrapper[4765]: I1203 20:51:54.800044 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:52:03 crc kubenswrapper[4765]: E1203 20:52:03.011757 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b9a2d2_8e49_45b0_b855_62ce65981a6c.slice/crio-conmon-e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28.scope\": RecentStats: unable to find data in memory cache]" Dec 03 20:52:13 crc kubenswrapper[4765]: E1203 20:52:13.145044 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b9a2d2_8e49_45b0_b855_62ce65981a6c.slice/crio-conmon-e38cfabbf444c4bad0201b1a60628b4747659f81ed607117f04df6b6447faf28.scope\": RecentStats: unable to find data in memory cache]" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.110132 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.111611 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.113582 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-crhkp" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.117909 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.126514 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.127845 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.130044 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rw26z" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.131206 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tsn\" (UniqueName: \"kubernetes.io/projected/d17f6ecc-799c-415b-98e2-67f859a96a1a-kube-api-access-66tsn\") pod \"barbican-operator-controller-manager-7d9dfd778-ww6rq\" (UID: \"d17f6ecc-799c-415b-98e2-67f859a96a1a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.131247 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dww7j\" (UniqueName: \"kubernetes.io/projected/e7dd69d2-65b2-4677-b6ac-e90fd4c695c1-kube-api-access-dww7j\") pod \"cinder-operator-controller-manager-859b6ccc6-mvdp4\" (UID: \"e7dd69d2-65b2-4677-b6ac-e90fd4c695c1\") 
" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.157939 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.165060 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.184353 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.185058 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.185513 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.188032 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.188339 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.188437 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.188571 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lbldg" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.194469 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-f8w6n" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.194612 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-gk25c" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.198969 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.207184 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.227176 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.228191 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.230579 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6xztw" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.232440 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tsn\" (UniqueName: \"kubernetes.io/projected/d17f6ecc-799c-415b-98e2-67f859a96a1a-kube-api-access-66tsn\") pod \"barbican-operator-controller-manager-7d9dfd778-ww6rq\" (UID: \"d17f6ecc-799c-415b-98e2-67f859a96a1a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.232496 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsp7\" (UniqueName: \"kubernetes.io/projected/84cb39fe-086b-4822-b54f-a5af68d2203c-kube-api-access-gnsp7\") pod \"glance-operator-controller-manager-6d7f88c74f-76fch\" (UID: \"84cb39fe-086b-4822-b54f-a5af68d2203c\") " pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.232517 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmc6\" (UniqueName: \"kubernetes.io/projected/797a4394-d04a-491b-8008-819165536dc0-kube-api-access-xdmc6\") pod \"horizon-operator-controller-manager-68c6d99b8f-9cdp5\" (UID: \"797a4394-d04a-491b-8008-819165536dc0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.232556 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2g2v\" (UniqueName: 
\"kubernetes.io/projected/48ba0b62-8ac2-4059-ac6a-8643ee1ad149-kube-api-access-s2g2v\") pod \"heat-operator-controller-manager-5f64f6f8bb-m9fpm\" (UID: \"48ba0b62-8ac2-4059-ac6a-8643ee1ad149\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.232585 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dww7j\" (UniqueName: \"kubernetes.io/projected/e7dd69d2-65b2-4677-b6ac-e90fd4c695c1-kube-api-access-dww7j\") pod \"cinder-operator-controller-manager-859b6ccc6-mvdp4\" (UID: \"e7dd69d2-65b2-4677-b6ac-e90fd4c695c1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.232634 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g797\" (UniqueName: \"kubernetes.io/projected/50b1a98b-3f25-4b3f-9f55-fa99f3911561-kube-api-access-8g797\") pod \"designate-operator-controller-manager-78b4bc895b-czxt5\" (UID: \"50b1a98b-3f25-4b3f-9f55-fa99f3911561\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.253648 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.264979 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dww7j\" (UniqueName: \"kubernetes.io/projected/e7dd69d2-65b2-4677-b6ac-e90fd4c695c1-kube-api-access-dww7j\") pod \"cinder-operator-controller-manager-859b6ccc6-mvdp4\" (UID: \"e7dd69d2-65b2-4677-b6ac-e90fd4c695c1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.265008 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-66tsn\" (UniqueName: \"kubernetes.io/projected/d17f6ecc-799c-415b-98e2-67f859a96a1a-kube-api-access-66tsn\") pod \"barbican-operator-controller-manager-7d9dfd778-ww6rq\" (UID: \"d17f6ecc-799c-415b-98e2-67f859a96a1a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.267108 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.268117 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.270924 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5b5kv" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.297563 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.298760 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.302516 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.303397 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jsjh2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.309542 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.322073 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.334276 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zswn4\" (UniqueName: \"kubernetes.io/projected/6ba1b815-d381-4999-9d4d-9b9b595f6d06-kube-api-access-zswn4\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.339382 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.339611 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnsp7\" (UniqueName: 
\"kubernetes.io/projected/84cb39fe-086b-4822-b54f-a5af68d2203c-kube-api-access-gnsp7\") pod \"glance-operator-controller-manager-6d7f88c74f-76fch\" (UID: \"84cb39fe-086b-4822-b54f-a5af68d2203c\") " pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.339690 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmc6\" (UniqueName: \"kubernetes.io/projected/797a4394-d04a-491b-8008-819165536dc0-kube-api-access-xdmc6\") pod \"horizon-operator-controller-manager-68c6d99b8f-9cdp5\" (UID: \"797a4394-d04a-491b-8008-819165536dc0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.339766 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2g2v\" (UniqueName: \"kubernetes.io/projected/48ba0b62-8ac2-4059-ac6a-8643ee1ad149-kube-api-access-s2g2v\") pod \"heat-operator-controller-manager-5f64f6f8bb-m9fpm\" (UID: \"48ba0b62-8ac2-4059-ac6a-8643ee1ad149\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.339901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g797\" (UniqueName: \"kubernetes.io/projected/50b1a98b-3f25-4b3f-9f55-fa99f3911561-kube-api-access-8g797\") pod \"designate-operator-controller-manager-78b4bc895b-czxt5\" (UID: \"50b1a98b-3f25-4b3f-9f55-fa99f3911561\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.340045 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66nqp\" (UniqueName: \"kubernetes.io/projected/4527f93e-9514-4750-9f1a-45d2fc649ef2-kube-api-access-66nqp\") pod 
\"ironic-operator-controller-manager-6c548fd776-vvxrw\" (UID: \"4527f93e-9514-4750-9f1a-45d2fc649ef2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.351473 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.352541 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.355265 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-czsvz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.361583 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2g2v\" (UniqueName: \"kubernetes.io/projected/48ba0b62-8ac2-4059-ac6a-8643ee1ad149-kube-api-access-s2g2v\") pod \"heat-operator-controller-manager-5f64f6f8bb-m9fpm\" (UID: \"48ba0b62-8ac2-4059-ac6a-8643ee1ad149\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.365642 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmc6\" (UniqueName: \"kubernetes.io/projected/797a4394-d04a-491b-8008-819165536dc0-kube-api-access-xdmc6\") pod \"horizon-operator-controller-manager-68c6d99b8f-9cdp5\" (UID: \"797a4394-d04a-491b-8008-819165536dc0\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.371903 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnsp7\" (UniqueName: \"kubernetes.io/projected/84cb39fe-086b-4822-b54f-a5af68d2203c-kube-api-access-gnsp7\") pod 
\"glance-operator-controller-manager-6d7f88c74f-76fch\" (UID: \"84cb39fe-086b-4822-b54f-a5af68d2203c\") " pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.377124 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g797\" (UniqueName: \"kubernetes.io/projected/50b1a98b-3f25-4b3f-9f55-fa99f3911561-kube-api-access-8g797\") pod \"designate-operator-controller-manager-78b4bc895b-czxt5\" (UID: \"50b1a98b-3f25-4b3f-9f55-fa99f3911561\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.380860 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.381796 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.383826 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sjk6c" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.420715 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.426067 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.432184 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.436568 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.437485 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.441459 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4sh\" (UniqueName: \"kubernetes.io/projected/a3cc780d-abf0-4a2b-99c3-67f9602a782f-kube-api-access-xf4sh\") pod \"keystone-operator-controller-manager-7765d96ddf-z6lzn\" (UID: \"a3cc780d-abf0-4a2b-99c3-67f9602a782f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.441520 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66nqp\" (UniqueName: \"kubernetes.io/projected/4527f93e-9514-4750-9f1a-45d2fc649ef2-kube-api-access-66nqp\") pod \"ironic-operator-controller-manager-6c548fd776-vvxrw\" (UID: \"4527f93e-9514-4750-9f1a-45d2fc649ef2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.441557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbmt\" (UniqueName: \"kubernetes.io/projected/65cf60b9-98a5-4fe7-8675-28aadb893c7c-kube-api-access-jkbmt\") pod \"manila-operator-controller-manager-7c79b5df47-tjlbs\" (UID: \"65cf60b9-98a5-4fe7-8675-28aadb893c7c\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.441596 
4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zswn4\" (UniqueName: \"kubernetes.io/projected/6ba1b815-d381-4999-9d4d-9b9b595f6d06-kube-api-access-zswn4\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.441639 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: E1203 20:52:20.441774 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:20 crc kubenswrapper[4765]: E1203 20:52:20.441820 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert podName:6ba1b815-d381-4999-9d4d-9b9b595f6d06 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:20.941806458 +0000 UTC m=+838.872351599 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert") pod "infra-operator-controller-manager-57548d458d-7fw8v" (UID: "6ba1b815-d381-4999-9d4d-9b9b595f6d06") : secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.443246 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.464398 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nwkwz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.480574 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.489226 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66nqp\" (UniqueName: \"kubernetes.io/projected/4527f93e-9514-4750-9f1a-45d2fc649ef2-kube-api-access-66nqp\") pod \"ironic-operator-controller-manager-6c548fd776-vvxrw\" (UID: \"4527f93e-9514-4750-9f1a-45d2fc649ef2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.544111 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.544401 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.548286 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zswn4\" (UniqueName: \"kubernetes.io/projected/6ba1b815-d381-4999-9d4d-9b9b595f6d06-kube-api-access-zswn4\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.554229 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvgjx\" (UniqueName: \"kubernetes.io/projected/8d1cf8df-8469-41f4-a801-040210dfbb9f-kube-api-access-lvgjx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-442kz\" (UID: \"8d1cf8df-8469-41f4-a801-040210dfbb9f\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.554266 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbmt\" (UniqueName: \"kubernetes.io/projected/65cf60b9-98a5-4fe7-8675-28aadb893c7c-kube-api-access-jkbmt\") pod \"manila-operator-controller-manager-7c79b5df47-tjlbs\" (UID: \"65cf60b9-98a5-4fe7-8675-28aadb893c7c\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.554342 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4sh\" (UniqueName: \"kubernetes.io/projected/a3cc780d-abf0-4a2b-99c3-67f9602a782f-kube-api-access-xf4sh\") pod \"keystone-operator-controller-manager-7765d96ddf-z6lzn\" (UID: \"a3cc780d-abf0-4a2b-99c3-67f9602a782f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 20:52:20 crc kubenswrapper[4765]: 
I1203 20:52:20.559125 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.566674 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.569988 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.574099 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8wsvn" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.578666 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.595203 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbmt\" (UniqueName: \"kubernetes.io/projected/65cf60b9-98a5-4fe7-8675-28aadb893c7c-kube-api-access-jkbmt\") pod \"manila-operator-controller-manager-7c79b5df47-tjlbs\" (UID: \"65cf60b9-98a5-4fe7-8675-28aadb893c7c\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.595481 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.596401 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.596465 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.605028 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-nlvs8" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.605621 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4sh\" (UniqueName: \"kubernetes.io/projected/a3cc780d-abf0-4a2b-99c3-67f9602a782f-kube-api-access-xf4sh\") pod \"keystone-operator-controller-manager-7765d96ddf-z6lzn\" (UID: \"a3cc780d-abf0-4a2b-99c3-67f9602a782f\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.624808 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.630473 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.631634 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.631893 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.637392 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qfgnb" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.646227 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.657032 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvgjx\" (UniqueName: \"kubernetes.io/projected/8d1cf8df-8469-41f4-a801-040210dfbb9f-kube-api-access-lvgjx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-442kz\" (UID: \"8d1cf8df-8469-41f4-a801-040210dfbb9f\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.681500 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvgjx\" (UniqueName: \"kubernetes.io/projected/8d1cf8df-8469-41f4-a801-040210dfbb9f-kube-api-access-lvgjx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-442kz\" (UID: \"8d1cf8df-8469-41f4-a801-040210dfbb9f\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.681583 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.682787 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.689420 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-88k27" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.689501 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.695863 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.697219 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.701569 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mvv6n" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.719438 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-dthq2"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.720562 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.723206 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.726643 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hwwcc" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.732003 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.745340 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.747749 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.750387 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mwzn7" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762784 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762818 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmspn\" (UniqueName: \"kubernetes.io/projected/f1d3e370-5bea-4bc9-9269-7483387b6e31-kube-api-access-zmspn\") pod \"placement-operator-controller-manager-78f8948974-dthq2\" (UID: 
\"f1d3e370-5bea-4bc9-9269-7483387b6e31\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762847 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5g4z\" (UniqueName: \"kubernetes.io/projected/bbbe5e38-0e74-426e-9ada-b2d8be5f8444-kube-api-access-g5g4z\") pod \"octavia-operator-controller-manager-998648c74-bbb8g\" (UID: \"bbbe5e38-0e74-426e-9ada-b2d8be5f8444\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762881 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwzv\" (UniqueName: \"kubernetes.io/projected/5a7474c6-a9ec-40ba-8d04-49166a15bab5-kube-api-access-fdwzv\") pod \"ovn-operator-controller-manager-b6456fdb6-n9556\" (UID: \"5a7474c6-a9ec-40ba-8d04-49166a15bab5\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762904 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx5c\" (UniqueName: \"kubernetes.io/projected/f0dd713c-31a7-4816-9044-bf59d8931367-kube-api-access-wvx5c\") pod \"swift-operator-controller-manager-5f8c65bbfc-wmrgj\" (UID: \"f0dd713c-31a7-4816-9044-bf59d8931367\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762921 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7zrs\" (UniqueName: \"kubernetes.io/projected/df89edd4-fc6d-4b27-8947-fbe909852d74-kube-api-access-d7zrs\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-f4g9d\" (UID: \"df89edd4-fc6d-4b27-8947-fbe909852d74\") " 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762941 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwxwg\" (UniqueName: \"kubernetes.io/projected/5f6f097a-e817-4f45-91fd-3c2d9d6b8d52-kube-api-access-fwxwg\") pod \"nova-operator-controller-manager-697bc559fc-x2qpv\" (UID: \"5f6f097a-e817-4f45-91fd-3c2d9d6b8d52\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.762958 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcjr\" (UniqueName: \"kubernetes.io/projected/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-kube-api-access-mtcjr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.764056 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.773980 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-dthq2"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.781681 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.790964 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.796603 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.824144 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.825367 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.832663 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wj7xl" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.832670 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.855247 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.860418 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863689 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7zrs\" (UniqueName: \"kubernetes.io/projected/df89edd4-fc6d-4b27-8947-fbe909852d74-kube-api-access-d7zrs\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-f4g9d\" (UID: \"df89edd4-fc6d-4b27-8947-fbe909852d74\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863717 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwxwg\" (UniqueName: \"kubernetes.io/projected/5f6f097a-e817-4f45-91fd-3c2d9d6b8d52-kube-api-access-fwxwg\") pod \"nova-operator-controller-manager-697bc559fc-x2qpv\" (UID: \"5f6f097a-e817-4f45-91fd-3c2d9d6b8d52\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863739 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtcjr\" (UniqueName: \"kubernetes.io/projected/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-kube-api-access-mtcjr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863776 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4dnj\" (UniqueName: \"kubernetes.io/projected/64675126-66c0-4cac-ad4e-764c10e0c344-kube-api-access-k4dnj\") pod \"telemetry-operator-controller-manager-76cc84c6bb-w955s\" (UID: \"64675126-66c0-4cac-ad4e-764c10e0c344\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" Dec 03 20:52:20 
crc kubenswrapper[4765]: I1203 20:52:20.863801 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6g7h\" (UniqueName: \"kubernetes.io/projected/629580d2-72ea-481f-b78e-e5b6631dfda4-kube-api-access-z6g7h\") pod \"test-operator-controller-manager-5854674fcc-h7pk2\" (UID: \"629580d2-72ea-481f-b78e-e5b6631dfda4\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863826 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863848 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmspn\" (UniqueName: \"kubernetes.io/projected/f1d3e370-5bea-4bc9-9269-7483387b6e31-kube-api-access-zmspn\") pod \"placement-operator-controller-manager-78f8948974-dthq2\" (UID: \"f1d3e370-5bea-4bc9-9269-7483387b6e31\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5g4z\" (UniqueName: \"kubernetes.io/projected/bbbe5e38-0e74-426e-9ada-b2d8be5f8444-kube-api-access-g5g4z\") pod \"octavia-operator-controller-manager-998648c74-bbb8g\" (UID: \"bbbe5e38-0e74-426e-9ada-b2d8be5f8444\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863910 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwzv\" 
(UniqueName: \"kubernetes.io/projected/5a7474c6-a9ec-40ba-8d04-49166a15bab5-kube-api-access-fdwzv\") pod \"ovn-operator-controller-manager-b6456fdb6-n9556\" (UID: \"5a7474c6-a9ec-40ba-8d04-49166a15bab5\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.863932 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvx5c\" (UniqueName: \"kubernetes.io/projected/f0dd713c-31a7-4816-9044-bf59d8931367-kube-api-access-wvx5c\") pod \"swift-operator-controller-manager-5f8c65bbfc-wmrgj\" (UID: \"f0dd713c-31a7-4816-9044-bf59d8931367\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.865343 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-464n8" Dec 03 20:52:20 crc kubenswrapper[4765]: E1203 20:52:20.865391 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:20 crc kubenswrapper[4765]: E1203 20:52:20.865551 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert podName:5e62f5de-bd17-4c8d-bc3f-0ce237d6e266 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:21.365537372 +0000 UTC m=+839.296082523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" (UID: "5e62f5de-bd17-4c8d-bc3f-0ce237d6e266") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.872661 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.891411 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.892800 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.901342 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-l8dq7" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.905249 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvx5c\" (UniqueName: \"kubernetes.io/projected/f0dd713c-31a7-4816-9044-bf59d8931367-kube-api-access-wvx5c\") pod \"swift-operator-controller-manager-5f8c65bbfc-wmrgj\" (UID: \"f0dd713c-31a7-4816-9044-bf59d8931367\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.914206 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.925721 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwxwg\" (UniqueName: 
\"kubernetes.io/projected/5f6f097a-e817-4f45-91fd-3c2d9d6b8d52-kube-api-access-fwxwg\") pod \"nova-operator-controller-manager-697bc559fc-x2qpv\" (UID: \"5f6f097a-e817-4f45-91fd-3c2d9d6b8d52\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.925885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7zrs\" (UniqueName: \"kubernetes.io/projected/df89edd4-fc6d-4b27-8947-fbe909852d74-kube-api-access-d7zrs\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-f4g9d\" (UID: \"df89edd4-fc6d-4b27-8947-fbe909852d74\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.926036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmspn\" (UniqueName: \"kubernetes.io/projected/f1d3e370-5bea-4bc9-9269-7483387b6e31-kube-api-access-zmspn\") pod \"placement-operator-controller-manager-78f8948974-dthq2\" (UID: \"f1d3e370-5bea-4bc9-9269-7483387b6e31\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.927010 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5g4z\" (UniqueName: \"kubernetes.io/projected/bbbe5e38-0e74-426e-9ada-b2d8be5f8444-kube-api-access-g5g4z\") pod \"octavia-operator-controller-manager-998648c74-bbb8g\" (UID: \"bbbe5e38-0e74-426e-9ada-b2d8be5f8444\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.942866 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwzv\" (UniqueName: \"kubernetes.io/projected/5a7474c6-a9ec-40ba-8d04-49166a15bab5-kube-api-access-fdwzv\") pod \"ovn-operator-controller-manager-b6456fdb6-n9556\" (UID: \"5a7474c6-a9ec-40ba-8d04-49166a15bab5\") 
" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.942954 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtcjr\" (UniqueName: \"kubernetes.io/projected/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-kube-api-access-mtcjr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.943116 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.964382 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.965468 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.964588 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.968065 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.968661 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.969819 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9fcjf" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.971871 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.971990 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kstz\" (UniqueName: \"kubernetes.io/projected/016c4fd7-25b8-42b0-ba5d-1008cd28b8b3-kube-api-access-2kstz\") pod \"watcher-operator-controller-manager-769dc69bc-f5s59\" (UID: \"016c4fd7-25b8-42b0-ba5d-1008cd28b8b3\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.972042 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4dnj\" (UniqueName: \"kubernetes.io/projected/64675126-66c0-4cac-ad4e-764c10e0c344-kube-api-access-k4dnj\") pod \"telemetry-operator-controller-manager-76cc84c6bb-w955s\" (UID: \"64675126-66c0-4cac-ad4e-764c10e0c344\") " 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.972089 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6g7h\" (UniqueName: \"kubernetes.io/projected/629580d2-72ea-481f-b78e-e5b6631dfda4-kube-api-access-z6g7h\") pod \"test-operator-controller-manager-5854674fcc-h7pk2\" (UID: \"629580d2-72ea-481f-b78e-e5b6631dfda4\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" Dec 03 20:52:20 crc kubenswrapper[4765]: E1203 20:52:20.972291 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:20 crc kubenswrapper[4765]: E1203 20:52:20.972357 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert podName:6ba1b815-d381-4999-9d4d-9b9b595f6d06 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:21.972341014 +0000 UTC m=+839.902886165 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert") pod "infra-operator-controller-manager-57548d458d-7fw8v" (UID: "6ba1b815-d381-4999-9d4d-9b9b595f6d06") : secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.973673 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"] Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.979439 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:20 crc kubenswrapper[4765]: I1203 20:52:20.988075 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4dnj\" (UniqueName: \"kubernetes.io/projected/64675126-66c0-4cac-ad4e-764c10e0c344-kube-api-access-k4dnj\") pod \"telemetry-operator-controller-manager-76cc84c6bb-w955s\" (UID: \"64675126-66c0-4cac-ad4e-764c10e0c344\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.003077 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6g7h\" (UniqueName: \"kubernetes.io/projected/629580d2-72ea-481f-b78e-e5b6631dfda4-kube-api-access-z6g7h\") pod \"test-operator-controller-manager-5854674fcc-h7pk2\" (UID: \"629580d2-72ea-481f-b78e-e5b6631dfda4\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.034396 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"] Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.035347 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.038657 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zxt7s" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.040623 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"] Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.042962 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.073278 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kstz\" (UniqueName: \"kubernetes.io/projected/016c4fd7-25b8-42b0-ba5d-1008cd28b8b3-kube-api-access-2kstz\") pod \"watcher-operator-controller-manager-769dc69bc-f5s59\" (UID: \"016c4fd7-25b8-42b0-ba5d-1008cd28b8b3\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.073342 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.073394 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94zw\" (UniqueName: \"kubernetes.io/projected/19b04cd5-57c6-4494-a08b-f425c37bf13a-kube-api-access-c94zw\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.073414 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:21 crc 
kubenswrapper[4765]: I1203 20:52:21.073484 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjxg\" (UniqueName: \"kubernetes.io/projected/47ff88bb-97bc-4d0b-a24b-64559741aa30-kube-api-access-lrjxg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b8kv2\" (UID: \"47ff88bb-97bc-4d0b-a24b-64559741aa30\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.081992 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.102528 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kstz\" (UniqueName: \"kubernetes.io/projected/016c4fd7-25b8-42b0-ba5d-1008cd28b8b3-kube-api-access-2kstz\") pod \"watcher-operator-controller-manager-769dc69bc-f5s59\" (UID: \"016c4fd7-25b8-42b0-ba5d-1008cd28b8b3\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.105476 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.156246 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.175399 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.175450 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94zw\" (UniqueName: \"kubernetes.io/projected/19b04cd5-57c6-4494-a08b-f425c37bf13a-kube-api-access-c94zw\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.175493 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.175578 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.175682 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:21.67565839 +0000 UTC m=+839.606203591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "metrics-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.175975 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.176033 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:21.676003949 +0000 UTC m=+839.606549100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.175587 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrjxg\" (UniqueName: \"kubernetes.io/projected/47ff88bb-97bc-4d0b-a24b-64559741aa30-kube-api-access-lrjxg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b8kv2\" (UID: \"47ff88bb-97bc-4d0b-a24b-64559741aa30\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.196359 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrjxg\" (UniqueName: \"kubernetes.io/projected/47ff88bb-97bc-4d0b-a24b-64559741aa30-kube-api-access-lrjxg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b8kv2\" (UID: \"47ff88bb-97bc-4d0b-a24b-64559741aa30\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.208049 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94zw\" (UniqueName: \"kubernetes.io/projected/19b04cd5-57c6-4494-a08b-f425c37bf13a-kube-api-access-c94zw\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.273245 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.303657 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.389968 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96"
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.390179 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.390231 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert podName:5e62f5de-bd17-4c8d-bc3f-0ce237d6e266 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:22.39021601 +0000 UTC m=+840.320761151 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" (UID: "5e62f5de-bd17-4c8d-bc3f-0ce237d6e266") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.433064 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.449673 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq"]
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.556235 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4"]
Dec 03 20:52:21 crc kubenswrapper[4765]: W1203 20:52:21.561972 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7dd69d2_65b2_4677_b6ac_e90fd4c695c1.slice/crio-5166063acae6a4f440e994e91f15840c262baa24afae00d5355930a28151153c WatchSource:0}: Error finding container 5166063acae6a4f440e994e91f15840c262baa24afae00d5355930a28151153c: Status 404 returned error can't find the container with id 5166063acae6a4f440e994e91f15840c262baa24afae00d5355930a28151153c
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.573193 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw"]
Dec 03 20:52:21 crc kubenswrapper[4765]: W1203 20:52:21.578767 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4527f93e_9514_4750_9f1a_45d2fc649ef2.slice/crio-5f4d056b4c0f397e95b9ca4e5f8fc59c3e13099a980ca3d388e49f8241298a49 WatchSource:0}: Error finding container 5f4d056b4c0f397e95b9ca4e5f8fc59c3e13099a980ca3d388e49f8241298a49: Status 404 returned error can't find the container with id 5f4d056b4c0f397e95b9ca4e5f8fc59c3e13099a980ca3d388e49f8241298a49
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.693132 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.693202 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.693284 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.693357 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:22.693341047 +0000 UTC m=+840.623886198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "metrics-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.693396 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.693478 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:22.69345885 +0000 UTC m=+840.624004071 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.699882 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" event={"ID":"e7dd69d2-65b2-4677-b6ac-e90fd4c695c1","Type":"ContainerStarted","Data":"5166063acae6a4f440e994e91f15840c262baa24afae00d5355930a28151153c"}
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.701229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" event={"ID":"4527f93e-9514-4750-9f1a-45d2fc649ef2","Type":"ContainerStarted","Data":"5f4d056b4c0f397e95b9ca4e5f8fc59c3e13099a980ca3d388e49f8241298a49"}
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.702352 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" event={"ID":"d17f6ecc-799c-415b-98e2-67f859a96a1a","Type":"ContainerStarted","Data":"fb95c8215bfbd07ca9b8f977d5ae59b66d6200509737e3d303ba80ce41bd2b36"}
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.903956 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d"]
Dec 03 20:52:21 crc kubenswrapper[4765]: W1203 20:52:21.922558 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3cc780d_abf0_4a2b_99c3_67f9602a782f.slice/crio-a6d412aefa5b0b1a0228ae8260cf7cae1cca37a2fd3276957767e49fdd2e86f8 WatchSource:0}: Error finding container a6d412aefa5b0b1a0228ae8260cf7cae1cca37a2fd3276957767e49fdd2e86f8: Status 404 returned error can't find the container with id a6d412aefa5b0b1a0228ae8260cf7cae1cca37a2fd3276957767e49fdd2e86f8
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.926986 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn"]
Dec 03 20:52:21 crc kubenswrapper[4765]: W1203 20:52:21.929454 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84cb39fe_086b_4822_b54f_a5af68d2203c.slice/crio-a3c8b5d553aefbd609f79c22bb0d9f9f8387cbf40504f40815bff4987bca1545 WatchSource:0}: Error finding container a3c8b5d553aefbd609f79c22bb0d9f9f8387cbf40504f40815bff4987bca1545: Status 404 returned error can't find the container with id a3c8b5d553aefbd609f79c22bb0d9f9f8387cbf40504f40815bff4987bca1545
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.951368 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch"]
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.973830 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5"]
Dec 03 20:52:21 crc kubenswrapper[4765]: W1203 20:52:21.974663 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ba0b62_8ac2_4059_ac6a_8643ee1ad149.slice/crio-605acdd5543e756b07c023ac86bc8eb9752d9f57a087ecfc7c475b6e07b90553 WatchSource:0}: Error finding container 605acdd5543e756b07c023ac86bc8eb9752d9f57a087ecfc7c475b6e07b90553: Status 404 returned error can't find the container with id 605acdd5543e756b07c023ac86bc8eb9752d9f57a087ecfc7c475b6e07b90553
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.983036 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5"]
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.987618 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm"]
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.997281 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v"
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.997641 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: E1203 20:52:21.997680 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert podName:6ba1b815-d381-4999-9d4d-9b9b595f6d06 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:23.997666516 +0000 UTC m=+841.928211667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert") pod "infra-operator-controller-manager-57548d458d-7fw8v" (UID: "6ba1b815-d381-4999-9d4d-9b9b595f6d06") : secret "infra-operator-webhook-server-cert" not found
Dec 03 20:52:21 crc kubenswrapper[4765]: I1203 20:52:21.997700 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs"]
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.004897 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz"]
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.007776 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g"]
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.020555 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4dnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-w955s_openstack-operators(64675126-66c0-4cac-ad4e-764c10e0c344): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.021835 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s"]
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.026680 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fwxwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-x2qpv_openstack-operators(5f6f097a-e817-4f45-91fd-3c2d9d6b8d52): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.026805 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdwzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-n9556_openstack-operators(5a7474c6-a9ec-40ba-8d04-49166a15bab5): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.026894 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4dnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-w955s_openstack-operators(64675126-66c0-4cac-ad4e-764c10e0c344): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.029134 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" podUID="64675126-66c0-4cac-ad4e-764c10e0c344"
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.029656 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-dthq2"]
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.033272 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdwzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-n9556_openstack-operators(5a7474c6-a9ec-40ba-8d04-49166a15bab5): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.033503 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fwxwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-x2qpv_openstack-operators(5f6f097a-e817-4f45-91fd-3c2d9d6b8d52): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.036011 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" podUID="5f6f097a-e817-4f45-91fd-3c2d9d6b8d52"
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.036195 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv"]
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.036451 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" podUID="5a7474c6-a9ec-40ba-8d04-49166a15bab5"
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.041264 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556"]
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.045732 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmspn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-dthq2_openstack-operators(f1d3e370-5bea-4bc9-9269-7483387b6e31): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.046478 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2"]
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.056460 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmspn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-dthq2_openstack-operators(f1d3e370-5bea-4bc9-9269-7483387b6e31): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.057472 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6g7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h7pk2_openstack-operators(629580d2-72ea-481f-b78e-e5b6631dfda4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.057513 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" podUID="f1d3e370-5bea-4bc9-9269-7483387b6e31" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.060192 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6g7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-h7pk2_openstack-operators(629580d2-72ea-481f-b78e-e5b6631dfda4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.061282 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" podUID="629580d2-72ea-481f-b78e-e5b6631dfda4" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.191246 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59"] Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.205938 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2"] Dec 03 20:52:22 crc 
kubenswrapper[4765]: W1203 20:52:22.209085 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016c4fd7_25b8_42b0_ba5d_1008cd28b8b3.slice/crio-e4ad61f66c76ee8fb8540e0ab6886d6e166c1f05150fc5d5800f495527ab3c4e WatchSource:0}: Error finding container e4ad61f66c76ee8fb8540e0ab6886d6e166c1f05150fc5d5800f495527ab3c4e: Status 404 returned error can't find the container with id e4ad61f66c76ee8fb8540e0ab6886d6e166c1f05150fc5d5800f495527ab3c4e Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.210600 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj"] Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.218049 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrjxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-b8kv2_openstack-operators(47ff88bb-97bc-4d0b-a24b-64559741aa30): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.219458 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" podUID="47ff88bb-97bc-4d0b-a24b-64559741aa30" Dec 03 20:52:22 crc kubenswrapper[4765]: W1203 20:52:22.222892 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0dd713c_31a7_4816_9044_bf59d8931367.slice/crio-629f46008396b10fd32673757cc0c8908bd48236648d5cdc80079fff9f27b19c WatchSource:0}: Error finding container 629f46008396b10fd32673757cc0c8908bd48236648d5cdc80079fff9f27b19c: Status 404 returned error can't find the container with id 629f46008396b10fd32673757cc0c8908bd48236648d5cdc80079fff9f27b19c Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.226990 
4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvx5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-wmrgj_openstack-operators(f0dd713c-31a7-4816-9044-bf59d8931367): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.229550 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvx5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-wmrgj_openstack-operators(f0dd713c-31a7-4816-9044-bf59d8931367): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.231353 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" podUID="f0dd713c-31a7-4816-9044-bf59d8931367" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.411667 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.411798 
4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.411847 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert podName:5e62f5de-bd17-4c8d-bc3f-0ce237d6e266 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:24.41183529 +0000 UTC m=+842.342380441 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" (UID: "5e62f5de-bd17-4c8d-bc3f-0ce237d6e266") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.711694 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" event={"ID":"df89edd4-fc6d-4b27-8947-fbe909852d74","Type":"ContainerStarted","Data":"352ec0d15b969f3ef34a98fdc895443be741a7a31034f4c2155b3a8865a36822"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.713268 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" event={"ID":"8d1cf8df-8469-41f4-a801-040210dfbb9f","Type":"ContainerStarted","Data":"875d5a0766c8c672dd9efb6926df3e3f8cb435cdd7f5cdfd3c39a210c4514a67"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.714615 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " 
pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.714739 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.714799 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.714863 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.714872 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:24.714853834 +0000 UTC m=+842.645398985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "webhook-server-cert" not found Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.714905 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:24.714892015 +0000 UTC m=+842.645437166 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "metrics-server-cert" not found Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.715491 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" event={"ID":"a3cc780d-abf0-4a2b-99c3-67f9602a782f","Type":"ContainerStarted","Data":"a6d412aefa5b0b1a0228ae8260cf7cae1cca37a2fd3276957767e49fdd2e86f8"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.716619 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" event={"ID":"f1d3e370-5bea-4bc9-9269-7483387b6e31","Type":"ContainerStarted","Data":"d61b3bdd4909a6f30123351eeac62a4ba2b7346b6ced346d449aa30a5d0f2edd"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.719796 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" event={"ID":"47ff88bb-97bc-4d0b-a24b-64559741aa30","Type":"ContainerStarted","Data":"2657b8c5ddf4ae5b680be02f1c37e99c99beb2df452c9c0c7202d73cfa971528"} Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.721003 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" podUID="47ff88bb-97bc-4d0b-a24b-64559741aa30" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.721138 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" podUID="f1d3e370-5bea-4bc9-9269-7483387b6e31" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.721652 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" event={"ID":"5f6f097a-e817-4f45-91fd-3c2d9d6b8d52","Type":"ContainerStarted","Data":"278490afc2baaa17bd00f4457f5829893819df6c58711ad22e3e58b136c94a16"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.722896 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" event={"ID":"5a7474c6-a9ec-40ba-8d04-49166a15bab5","Type":"ContainerStarted","Data":"0b06bed4de4326b7bbbadd05f92bf2367785f42a384bacc170f6e0b5a131120e"} Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.724614 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" podUID="5a7474c6-a9ec-40ba-8d04-49166a15bab5" Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.724683 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" podUID="5f6f097a-e817-4f45-91fd-3c2d9d6b8d52" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.727356 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" event={"ID":"f0dd713c-31a7-4816-9044-bf59d8931367","Type":"ContainerStarted","Data":"629f46008396b10fd32673757cc0c8908bd48236648d5cdc80079fff9f27b19c"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.728969 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" event={"ID":"84cb39fe-086b-4822-b54f-a5af68d2203c","Type":"ContainerStarted","Data":"a3c8b5d553aefbd609f79c22bb0d9f9f8387cbf40504f40815bff4987bca1545"} Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.735319 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" podUID="f0dd713c-31a7-4816-9044-bf59d8931367" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.738086 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" 
event={"ID":"65cf60b9-98a5-4fe7-8675-28aadb893c7c","Type":"ContainerStarted","Data":"32d122c31ccb7251cc7a8ceb922c7ad19288632adf2d240ffc4d9a57ed402e3e"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.741143 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" event={"ID":"016c4fd7-25b8-42b0-ba5d-1008cd28b8b3","Type":"ContainerStarted","Data":"e4ad61f66c76ee8fb8540e0ab6886d6e166c1f05150fc5d5800f495527ab3c4e"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.743086 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" event={"ID":"797a4394-d04a-491b-8008-819165536dc0","Type":"ContainerStarted","Data":"7f022df1cbc145e76e1c6a108d35a353b628f732cdc9d7f86ba76daa9c1fde52"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.746205 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" event={"ID":"64675126-66c0-4cac-ad4e-764c10e0c344","Type":"ContainerStarted","Data":"1bc5de3e69ca55d74071bae28d718417b521f4da9b683f1a12b85bfa64ef3e53"} Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.750079 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" podUID="64675126-66c0-4cac-ad4e-764c10e0c344" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.751116 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" event={"ID":"50b1a98b-3f25-4b3f-9f55-fa99f3911561","Type":"ContainerStarted","Data":"bc92eedd8a46d0359216c30b66062d8c2244100b4ecbd821d342bac6ae95972d"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.754169 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" event={"ID":"bbbe5e38-0e74-426e-9ada-b2d8be5f8444","Type":"ContainerStarted","Data":"642a5f42177e79a402b80db217344defbcb2072badebc64795956063a2548eb0"} Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.763086 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" event={"ID":"629580d2-72ea-481f-b78e-e5b6631dfda4","Type":"ContainerStarted","Data":"6518270084eb5f95c7e514cc0e932a2e821cc044080b18c2cf160e38080d0228"} Dec 03 20:52:22 crc kubenswrapper[4765]: E1203 20:52:22.765849 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" podUID="629580d2-72ea-481f-b78e-e5b6631dfda4" Dec 03 20:52:22 crc kubenswrapper[4765]: I1203 20:52:22.767270 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" event={"ID":"48ba0b62-8ac2-4059-ac6a-8643ee1ad149","Type":"ContainerStarted","Data":"605acdd5543e756b07c023ac86bc8eb9752d9f57a087ecfc7c475b6e07b90553"} Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.784682 4765 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" podUID="47ff88bb-97bc-4d0b-a24b-64559741aa30" Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.785250 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" podUID="5a7474c6-a9ec-40ba-8d04-49166a15bab5" Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.785330 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" podUID="5f6f097a-e817-4f45-91fd-3c2d9d6b8d52" Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.785371 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" podUID="f1d3e370-5bea-4bc9-9269-7483387b6e31" Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.785638 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" podUID="629580d2-72ea-481f-b78e-e5b6631dfda4" Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.785823 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" podUID="f0dd713c-31a7-4816-9044-bf59d8931367" Dec 03 20:52:23 crc kubenswrapper[4765]: E1203 20:52:23.785894 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" podUID="64675126-66c0-4cac-ad4e-764c10e0c344" Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.044570 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.044733 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.044781 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert podName:6ba1b815-d381-4999-9d4d-9b9b595f6d06 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:28.044767443 +0000 UTC m=+845.975312714 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert") pod "infra-operator-controller-manager-57548d458d-7fw8v" (UID: "6ba1b815-d381-4999-9d4d-9b9b595f6d06") : secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.451011 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.451247 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.451377 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert podName:5e62f5de-bd17-4c8d-bc3f-0ce237d6e266 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:28.451350083 +0000 UTC m=+846.381895274 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" (UID: "5e62f5de-bd17-4c8d-bc3f-0ce237d6e266") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.754795 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.754927 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.755109 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.755167 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:28.755149294 +0000 UTC m=+846.685694455 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "webhook-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.755559 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: E1203 20:52:24.755591 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:28.755581456 +0000 UTC m=+846.686126617 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "metrics-server-cert" not found Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.798134 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.798190 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.798234 4765 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.798896 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ba36381a71f6d06b4b5aa7cb8542b9c71a3ce01cc92c054d25575f73f145c33"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 20:52:24 crc kubenswrapper[4765]: I1203 20:52:24.798979 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://3ba36381a71f6d06b4b5aa7cb8542b9c71a3ce01cc92c054d25575f73f145c33" gracePeriod=600 Dec 03 20:52:25 crc kubenswrapper[4765]: I1203 20:52:25.807710 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="3ba36381a71f6d06b4b5aa7cb8542b9c71a3ce01cc92c054d25575f73f145c33" exitCode=0 Dec 03 20:52:25 crc kubenswrapper[4765]: I1203 20:52:25.807767 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"3ba36381a71f6d06b4b5aa7cb8542b9c71a3ce01cc92c054d25575f73f145c33"} Dec 03 20:52:25 crc kubenswrapper[4765]: I1203 20:52:25.807809 4765 scope.go:117] "RemoveContainer" containerID="9aa4b32617093128f6bf7ab64206090db11f5d644179d39ee68c6b4891662abe" Dec 03 20:52:28 crc kubenswrapper[4765]: I1203 20:52:28.100929 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: 
\"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.101127 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.101446 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert podName:6ba1b815-d381-4999-9d4d-9b9b595f6d06 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:36.101422917 +0000 UTC m=+854.031968068 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert") pod "infra-operator-controller-manager-57548d458d-7fw8v" (UID: "6ba1b815-d381-4999-9d4d-9b9b595f6d06") : secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: I1203 20:52:28.505467 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.505702 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.505793 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert podName:5e62f5de-bd17-4c8d-bc3f-0ce237d6e266 nodeName:}" failed. 
No retries permitted until 2025-12-03 20:52:36.505769749 +0000 UTC m=+854.436314920 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" (UID: "5e62f5de-bd17-4c8d-bc3f-0ce237d6e266") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: I1203 20:52:28.809039 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:28 crc kubenswrapper[4765]: I1203 20:52:28.809207 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.809238 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.809321 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:36.809290141 +0000 UTC m=+854.739835292 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "webhook-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.809353 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 20:52:28 crc kubenswrapper[4765]: E1203 20:52:28.809432 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:36.809396254 +0000 UTC m=+854.739941425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "metrics-server-cert" not found Dec 03 20:52:33 crc kubenswrapper[4765]: E1203 20:52:33.189346 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dww7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-mvdp4_openstack-operators(e7dd69d2-65b2-4677-b6ac-e90fd4c695c1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:33 crc kubenswrapper[4765]: E1203 20:52:33.189377 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kstz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-f5s59_openstack-operators(016c4fd7-25b8-42b0-ba5d-1008cd28b8b3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 20:52:33 crc kubenswrapper[4765]: E1203 20:52:33.191221 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" podUID="016c4fd7-25b8-42b0-ba5d-1008cd28b8b3" Dec 03 20:52:33 crc kubenswrapper[4765]: E1203 20:52:33.191269 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" podUID="e7dd69d2-65b2-4677-b6ac-e90fd4c695c1" Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.901152 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" 
event={"ID":"50b1a98b-3f25-4b3f-9f55-fa99f3911561","Type":"ContainerStarted","Data":"075fa4db1d5a1aee4b4648dc96abcadefc46aa01a266861499842c359c53b53f"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.909513 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" event={"ID":"65cf60b9-98a5-4fe7-8675-28aadb893c7c","Type":"ContainerStarted","Data":"663b2b3baa4c8d83d25f5ab410ae5f7fc3c71e52f4c0baf861d05f734fe8c41f"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.922922 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" event={"ID":"797a4394-d04a-491b-8008-819165536dc0","Type":"ContainerStarted","Data":"fe7f6e308ed9c18ba4c7be49e41216536473ed7abfc892f6620efb3f90e7e726"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.928526 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" event={"ID":"84cb39fe-086b-4822-b54f-a5af68d2203c","Type":"ContainerStarted","Data":"50aaa8726b0be7bf54b44ec652279b220ce3f58b3369897266d705736a4cf0d0"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.929818 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" event={"ID":"a3cc780d-abf0-4a2b-99c3-67f9602a782f","Type":"ContainerStarted","Data":"de2a58123828e0d7863db67aa70b5d98ac943d04a105a559272c2f441f950a08"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.932516 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" event={"ID":"df89edd4-fc6d-4b27-8947-fbe909852d74","Type":"ContainerStarted","Data":"f543947e2d63c17b285415aaf49a9dbfca7787a4ba24d2bbc2f301db96c42cab"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.933510 4765 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" event={"ID":"bbbe5e38-0e74-426e-9ada-b2d8be5f8444","Type":"ContainerStarted","Data":"561fba15b2fceea73647b951538f5c4d382b928226eeefa333409a352690a063"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.934365 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" event={"ID":"d17f6ecc-799c-415b-98e2-67f859a96a1a","Type":"ContainerStarted","Data":"0df7911554029a5ea5f8b797b05c588c2e3b16c0dfbab78f65c7f2f34356c097"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.935608 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"d266b170dcf90c0708b0665cd61a7d72698207d468421a1880d76491e1e67a93"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.947973 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" event={"ID":"016c4fd7-25b8-42b0-ba5d-1008cd28b8b3","Type":"ContainerStarted","Data":"fcde59fdd4696097b7d0f34c95ef51179aa291a6466d7fc5d32117736c307fbc"} Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.948562 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" Dec 03 20:52:33 crc kubenswrapper[4765]: E1203 20:52:33.956607 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" podUID="016c4fd7-25b8-42b0-ba5d-1008cd28b8b3" Dec 03 20:52:33 crc kubenswrapper[4765]: I1203 20:52:33.972678 4765 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" event={"ID":"48ba0b62-8ac2-4059-ac6a-8643ee1ad149","Type":"ContainerStarted","Data":"4f18eafba04fa1fc6142cd3bd41efdf822a66fa9986b2fe1e63fc0682bc2047c"} Dec 03 20:52:34 crc kubenswrapper[4765]: I1203 20:52:34.001589 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" event={"ID":"4527f93e-9514-4750-9f1a-45d2fc649ef2","Type":"ContainerStarted","Data":"62db681e70e8d847ffa3e8c87da9097db97b1b15bcfd2f65a0d9b59bfb9b5b37"} Dec 03 20:52:34 crc kubenswrapper[4765]: I1203 20:52:34.014896 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" event={"ID":"8d1cf8df-8469-41f4-a801-040210dfbb9f","Type":"ContainerStarted","Data":"66234a5546fc024d83831058381a525e9efd00cda7dd790277fa0b821aebade2"} Dec 03 20:52:34 crc kubenswrapper[4765]: I1203 20:52:34.021359 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" event={"ID":"e7dd69d2-65b2-4677-b6ac-e90fd4c695c1","Type":"ContainerStarted","Data":"a89b364180fcf23f925e26bb5b65d855ee3323351255a764f0141a80edcde87b"} Dec 03 20:52:34 crc kubenswrapper[4765]: I1203 20:52:34.021631 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:34 crc kubenswrapper[4765]: E1203 20:52:34.029269 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" podUID="e7dd69d2-65b2-4677-b6ac-e90fd4c695c1" Dec 03 20:52:35 crc kubenswrapper[4765]: E1203 20:52:35.030583 4765 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" podUID="016c4fd7-25b8-42b0-ba5d-1008cd28b8b3" Dec 03 20:52:35 crc kubenswrapper[4765]: E1203 20:52:35.030702 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" podUID="e7dd69d2-65b2-4677-b6ac-e90fd4c695c1" Dec 03 20:52:36 crc kubenswrapper[4765]: I1203 20:52:36.118794 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.118994 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.119503 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert podName:6ba1b815-d381-4999-9d4d-9b9b595f6d06 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:52.119487557 +0000 UTC m=+870.050032708 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert") pod "infra-operator-controller-manager-57548d458d-7fw8v" (UID: "6ba1b815-d381-4999-9d4d-9b9b595f6d06") : secret "infra-operator-webhook-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: I1203 20:52:36.527597 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.527909 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.527988 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert podName:5e62f5de-bd17-4c8d-bc3f-0ce237d6e266 nodeName:}" failed. No retries permitted until 2025-12-03 20:52:52.527968889 +0000 UTC m=+870.458514050 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" (UID: "5e62f5de-bd17-4c8d-bc3f-0ce237d6e266") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: I1203 20:52:36.831936 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:36 crc kubenswrapper[4765]: I1203 20:52:36.832024 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.832255 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.832256 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.832401 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:52.832379896 +0000 UTC m=+870.762925047 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "webhook-server-cert" not found Dec 03 20:52:36 crc kubenswrapper[4765]: E1203 20:52:36.832431 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs podName:19b04cd5-57c6-4494-a08b-f425c37bf13a nodeName:}" failed. No retries permitted until 2025-12-03 20:52:52.832421197 +0000 UTC m=+870.762966448 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs") pod "openstack-operator-controller-manager-547c884594-d98p4" (UID: "19b04cd5-57c6-4494-a08b-f425c37bf13a") : secret "metrics-server-cert" not found Dec 03 20:52:40 crc kubenswrapper[4765]: I1203 20:52:40.484879 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" Dec 03 20:52:40 crc kubenswrapper[4765]: E1203 20:52:40.487920 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" podUID="e7dd69d2-65b2-4677-b6ac-e90fd4c695c1" Dec 03 20:52:41 crc kubenswrapper[4765]: I1203 20:52:41.308099 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" Dec 03 20:52:41 crc kubenswrapper[4765]: E1203 20:52:41.310864 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" podUID="016c4fd7-25b8-42b0-ba5d-1008cd28b8b3" Dec 03 20:52:45 crc kubenswrapper[4765]: E1203 20:52:45.502810 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 03 20:52:45 crc kubenswrapper[4765]: E1203 20:52:45.503498 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkbmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-tjlbs_openstack-operators(65cf60b9-98a5-4fe7-8675-28aadb893c7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 20:52:45 crc kubenswrapper[4765]: E1203 20:52:45.504687 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" podUID="65cf60b9-98a5-4fe7-8675-28aadb893c7c" Dec 03 20:52:46 crc kubenswrapper[4765]: I1203 20:52:46.134323 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:46 crc kubenswrapper[4765]: I1203 20:52:46.139347 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" Dec 03 20:52:51 crc kubenswrapper[4765]: I1203 20:52:51.182799 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" event={"ID":"f1d3e370-5bea-4bc9-9269-7483387b6e31","Type":"ContainerStarted","Data":"34f0c225ca380d42f2bfac70eb509de93b811899504271c4c42ebefff4f1220b"} Dec 03 20:52:51 crc kubenswrapper[4765]: I1203 20:52:51.185234 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" event={"ID":"5f6f097a-e817-4f45-91fd-3c2d9d6b8d52","Type":"ContainerStarted","Data":"036169ef1b13ea8795fbff3e4af950d99ca474e55e5e763f131f6c8b741f0b4f"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.180408 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.189939 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ba1b815-d381-4999-9d4d-9b9b595f6d06-cert\") pod \"infra-operator-controller-manager-57548d458d-7fw8v\" (UID: \"6ba1b815-d381-4999-9d4d-9b9b595f6d06\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.205207 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" event={"ID":"629580d2-72ea-481f-b78e-e5b6631dfda4","Type":"ContainerStarted","Data":"c6190932bcb09db95eaff5754bd8c45aac4a575233ae7d4b84e5c4e8529f08e0"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.205259 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" event={"ID":"629580d2-72ea-481f-b78e-e5b6631dfda4","Type":"ContainerStarted","Data":"420653c38fa02ded34b796410cdd3bcb1463e3a0f98df658787cace0a67cbaf2"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.205431 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.206647 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" event={"ID":"5a7474c6-a9ec-40ba-8d04-49166a15bab5","Type":"ContainerStarted","Data":"46ccf6359612fb902e2439b4518d64af61ccdd0b22cfccb4e82c2f68a1cf9257"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.206675 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" event={"ID":"5a7474c6-a9ec-40ba-8d04-49166a15bab5","Type":"ContainerStarted","Data":"d4f4dab7b0884abacf4f1c06c0b2f7e97a8c397c5fabca8bf3d7cb398b416968"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.206823 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.207912 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" event={"ID":"f0dd713c-31a7-4816-9044-bf59d8931367","Type":"ContainerStarted","Data":"bd164b63f576d8cfd656de3c7529f788f64d924971a057ba841a7fe5556b2804"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.207935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" 
event={"ID":"f0dd713c-31a7-4816-9044-bf59d8931367","Type":"ContainerStarted","Data":"7ee36c1418985970b6a7060f0fb8884aaf37cf1ef6f3a2e74241e969cab45524"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.208090 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.209229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" event={"ID":"84cb39fe-086b-4822-b54f-a5af68d2203c","Type":"ContainerStarted","Data":"95a20d32b8a27acbd37879fc290a5b427c3966515be9c783a2ef20b7798a2fb1"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.209276 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.210706 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" event={"ID":"64675126-66c0-4cac-ad4e-764c10e0c344","Type":"ContainerStarted","Data":"be6484bdd1546713a916d667817d4f548b56de111a8a7c4387909ec8a275c200"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.210735 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" event={"ID":"64675126-66c0-4cac-ad4e-764c10e0c344","Type":"ContainerStarted","Data":"c50af5abdd934db39df898c90f344317d210458b2d60e4fbef2d4ee51310649e"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.210877 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.211901 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.212222 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" event={"ID":"50b1a98b-3f25-4b3f-9f55-fa99f3911561","Type":"ContainerStarted","Data":"eb9b9ff333a48515761b0f4fb39eb5b296ec75915c985ffd3f29bfbf3faad384"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.212411 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.213677 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" event={"ID":"df89edd4-fc6d-4b27-8947-fbe909852d74","Type":"ContainerStarted","Data":"281b4a5a85ab6b26b2462f3b61ee1ae73a5e83c5c3e6002086e3d4a0f2f2cc54"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.213909 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.214057 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.215585 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.217607 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" event={"ID":"8d1cf8df-8469-41f4-a801-040210dfbb9f","Type":"ContainerStarted","Data":"c8a33e4fe4150849256301d27389ae790302805a314ad2b3b74aa58429555ed2"} Dec 03 
20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.217793 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.218887 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.219069 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" event={"ID":"5f6f097a-e817-4f45-91fd-3c2d9d6b8d52","Type":"ContainerStarted","Data":"60e9ab2ec50e4a48067dfccab7a3722414154ec39cb5afb6e2071d717b723db4"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.219187 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.220578 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" event={"ID":"48ba0b62-8ac2-4059-ac6a-8643ee1ad149","Type":"ContainerStarted","Data":"bc34d05265744b9d9fd2ead0dcac059d8bcd73debc326879eb16578afd77457a"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.220703 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.222047 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" event={"ID":"797a4394-d04a-491b-8008-819165536dc0","Type":"ContainerStarted","Data":"d3d72dea360b6c14aaf071cc8f96dee1a73f2078ff19e1007e3f343416d90f6e"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.222792 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.223096 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.223764 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.224445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" event={"ID":"4527f93e-9514-4750-9f1a-45d2fc649ef2","Type":"ContainerStarted","Data":"abaeb7c3a83292f70b5956bd43cd2d9a141cc1e6ac96c69825eeca4754c840a8"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.225966 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" event={"ID":"d17f6ecc-799c-415b-98e2-67f859a96a1a","Type":"ContainerStarted","Data":"1c77c11083e5e04de081ae1ccda4b393978c1c248dd641bc3785c84eed3184ec"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.226650 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.232241 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" event={"ID":"a3cc780d-abf0-4a2b-99c3-67f9602a782f","Type":"ContainerStarted","Data":"8209e35b4515efe10f9a8f0665a673fe01a1afddddd1edf93495957227444aaf"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.232328 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 
20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.233142 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" podStartSLOduration=3.562260773 podStartE2EDuration="32.233127151s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.057322408 +0000 UTC m=+839.987867559" lastFinishedPulling="2025-12-03 20:52:50.728188756 +0000 UTC m=+868.658733937" observedRunningTime="2025-12-03 20:52:52.231879768 +0000 UTC m=+870.162424939" watchObservedRunningTime="2025-12-03 20:52:52.233127151 +0000 UTC m=+870.163672302" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.234344 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" event={"ID":"f1d3e370-5bea-4bc9-9269-7483387b6e31","Type":"ContainerStarted","Data":"e1857f37a2ab57c280a5a5bc25edfc9ffde82f71da0c9f291be312eacffd3e9c"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.234689 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.234709 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.236151 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.236260 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" event={"ID":"65cf60b9-98a5-4fe7-8675-28aadb893c7c","Type":"ContainerStarted","Data":"975ee1dbaa160629a2000a5fc1f1aa3f45b105e829dd3a5af8cb16ee40d7f4bc"} Dec 03 
20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.237889 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" event={"ID":"47ff88bb-97bc-4d0b-a24b-64559741aa30","Type":"ContainerStarted","Data":"8ecc24e3fb5f825b5dcc27454ca07c2986e08627745915625b2981eec77755ff"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.243078 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" event={"ID":"bbbe5e38-0e74-426e-9ada-b2d8be5f8444","Type":"ContainerStarted","Data":"3a7d4994c5b4a7903f617fdc59aaaed62646f2700728614aaa7424b10e988812"} Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.243221 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.247320 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.265483 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" podStartSLOduration=3.5573096680000003 podStartE2EDuration="32.265464887s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.026750177 +0000 UTC m=+839.957295328" lastFinishedPulling="2025-12-03 20:52:50.734905396 +0000 UTC m=+868.665450547" observedRunningTime="2025-12-03 20:52:52.264315137 +0000 UTC m=+870.194860288" watchObservedRunningTime="2025-12-03 20:52:52.265464887 +0000 UTC m=+870.196010028" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.290741 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" 
podStartSLOduration=3.568806484 podStartE2EDuration="32.290721493s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.020367884 +0000 UTC m=+839.950913035" lastFinishedPulling="2025-12-03 20:52:50.742282893 +0000 UTC m=+868.672828044" observedRunningTime="2025-12-03 20:52:52.290359203 +0000 UTC m=+870.220904354" watchObservedRunningTime="2025-12-03 20:52:52.290721493 +0000 UTC m=+870.221266644" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.312095 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6d7f88c74f-76fch" podStartSLOduration=3.438940584 podStartE2EDuration="32.312077025s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.937285926 +0000 UTC m=+839.867831077" lastFinishedPulling="2025-12-03 20:52:50.810422367 +0000 UTC m=+868.740967518" observedRunningTime="2025-12-03 20:52:52.310466851 +0000 UTC m=+870.241012002" watchObservedRunningTime="2025-12-03 20:52:52.312077025 +0000 UTC m=+870.242622176" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.354250 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-czxt5" podStartSLOduration=8.494858293 podStartE2EDuration="32.354231863s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.945065437 +0000 UTC m=+839.875610588" lastFinishedPulling="2025-12-03 20:52:45.804439017 +0000 UTC m=+863.734984158" observedRunningTime="2025-12-03 20:52:52.35001954 +0000 UTC m=+870.280564711" watchObservedRunningTime="2025-12-03 20:52:52.354231863 +0000 UTC m=+870.284777014" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.374693 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" 
podStartSLOduration=3.863197186 podStartE2EDuration="32.37467663s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.226853694 +0000 UTC m=+840.157398845" lastFinishedPulling="2025-12-03 20:52:50.738333138 +0000 UTC m=+868.668878289" observedRunningTime="2025-12-03 20:52:52.370073806 +0000 UTC m=+870.300618957" watchObservedRunningTime="2025-12-03 20:52:52.37467663 +0000 UTC m=+870.305221781" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.391390 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-m9fpm" podStartSLOduration=3.64755888 podStartE2EDuration="32.391370677s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.994532841 +0000 UTC m=+839.925077992" lastFinishedPulling="2025-12-03 20:52:50.738344638 +0000 UTC m=+868.668889789" observedRunningTime="2025-12-03 20:52:52.387815332 +0000 UTC m=+870.318360493" watchObservedRunningTime="2025-12-03 20:52:52.391370677 +0000 UTC m=+870.321915828" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.413750 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ww6rq" podStartSLOduration=3.137965458 podStartE2EDuration="32.413731945s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.497553287 +0000 UTC m=+839.428098438" lastFinishedPulling="2025-12-03 20:52:50.773319774 +0000 UTC m=+868.703864925" observedRunningTime="2025-12-03 20:52:52.411801953 +0000 UTC m=+870.342347114" watchObservedRunningTime="2025-12-03 20:52:52.413731945 +0000 UTC m=+870.344277096" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.427738 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" podStartSLOduration=3.221492426 
podStartE2EDuration="32.427722739s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.58158917 +0000 UTC m=+839.512134321" lastFinishedPulling="2025-12-03 20:52:50.787819483 +0000 UTC m=+868.718364634" observedRunningTime="2025-12-03 20:52:52.42736623 +0000 UTC m=+870.357911391" watchObservedRunningTime="2025-12-03 20:52:52.427722739 +0000 UTC m=+870.358267880" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.453255 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jsjh2" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.459436 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.460551 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-442kz" podStartSLOduration=3.666341914 podStartE2EDuration="32.460531388s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.996168836 +0000 UTC m=+839.926713987" lastFinishedPulling="2025-12-03 20:52:50.79035831 +0000 UTC m=+868.720903461" observedRunningTime="2025-12-03 20:52:52.459202732 +0000 UTC m=+870.389747883" watchObservedRunningTime="2025-12-03 20:52:52.460531388 +0000 UTC m=+870.391076539" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.504659 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-9cdp5" podStartSLOduration=4.197000694 podStartE2EDuration="32.504641498s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.944712138 +0000 UTC m=+839.875257289" lastFinishedPulling="2025-12-03 20:52:50.252352932 +0000 UTC m=+868.182898093" 
observedRunningTime="2025-12-03 20:52:52.502162972 +0000 UTC m=+870.432708113" watchObservedRunningTime="2025-12-03 20:52:52.504641498 +0000 UTC m=+870.435186649" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.521909 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" podStartSLOduration=9.387126226 podStartE2EDuration="32.52189367s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.026541451 +0000 UTC m=+839.957086602" lastFinishedPulling="2025-12-03 20:52:45.161308895 +0000 UTC m=+863.091854046" observedRunningTime="2025-12-03 20:52:52.521598961 +0000 UTC m=+870.452144112" watchObservedRunningTime="2025-12-03 20:52:52.52189367 +0000 UTC m=+870.452438821" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.542157 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-f4g9d" podStartSLOduration=3.654504534 podStartE2EDuration="32.542137642s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.915439492 +0000 UTC m=+839.845984643" lastFinishedPulling="2025-12-03 20:52:50.8030726 +0000 UTC m=+868.733617751" observedRunningTime="2025-12-03 20:52:52.54023816 +0000 UTC m=+870.470783311" watchObservedRunningTime="2025-12-03 20:52:52.542137642 +0000 UTC m=+870.472682793" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.558466 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b8kv2" podStartSLOduration=3.976506385 podStartE2EDuration="32.558451538s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.217941942 +0000 UTC m=+840.148487093" lastFinishedPulling="2025-12-03 20:52:50.799887095 +0000 UTC m=+868.730432246" observedRunningTime="2025-12-03 
20:52:52.55778032 +0000 UTC m=+870.488325471" watchObservedRunningTime="2025-12-03 20:52:52.558451538 +0000 UTC m=+870.488996689" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.589029 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.605896 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bbb8g" podStartSLOduration=3.870763663 podStartE2EDuration="32.605874198s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.99413733 +0000 UTC m=+839.924682481" lastFinishedPulling="2025-12-03 20:52:50.729247865 +0000 UTC m=+868.659793016" observedRunningTime="2025-12-03 20:52:52.595656534 +0000 UTC m=+870.526201685" watchObservedRunningTime="2025-12-03 20:52:52.605874198 +0000 UTC m=+870.536419349" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.607034 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e62f5de-bd17-4c8d-bc3f-0ce237d6e266-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96\" (UID: \"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.656349 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-z6lzn" podStartSLOduration=8.230425637 podStartE2EDuration="32.656293767s" podCreationTimestamp="2025-12-03 20:52:20 
+0000 UTC" firstStartedPulling="2025-12-03 20:52:21.94331763 +0000 UTC m=+839.873862791" lastFinishedPulling="2025-12-03 20:52:46.36918577 +0000 UTC m=+864.299730921" observedRunningTime="2025-12-03 20:52:52.6556762 +0000 UTC m=+870.586221351" watchObservedRunningTime="2025-12-03 20:52:52.656293767 +0000 UTC m=+870.586838918" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.681373 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" podStartSLOduration=3.98644784 podStartE2EDuration="32.681354177s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:22.045562998 +0000 UTC m=+839.976108149" lastFinishedPulling="2025-12-03 20:52:50.740469335 +0000 UTC m=+868.671014486" observedRunningTime="2025-12-03 20:52:52.678234293 +0000 UTC m=+870.608779434" watchObservedRunningTime="2025-12-03 20:52:52.681354177 +0000 UTC m=+870.611899328" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.740780 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-tjlbs" podStartSLOduration=22.109518039 podStartE2EDuration="32.740757957s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.992718382 +0000 UTC m=+839.923263533" lastFinishedPulling="2025-12-03 20:52:32.6239583 +0000 UTC m=+850.554503451" observedRunningTime="2025-12-03 20:52:52.72256998 +0000 UTC m=+870.653115131" watchObservedRunningTime="2025-12-03 20:52:52.740757957 +0000 UTC m=+870.671303098" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.813114 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-88k27" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.821833 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.892065 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.892485 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.897984 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-webhook-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:52 crc kubenswrapper[4765]: I1203 20:52:52.914138 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19b04cd5-57c6-4494-a08b-f425c37bf13a-metrics-certs\") pod \"openstack-operator-controller-manager-547c884594-d98p4\" (UID: \"19b04cd5-57c6-4494-a08b-f425c37bf13a\") " pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.000438 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v"] Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.130621 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9fcjf" Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.139717 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.300490 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" event={"ID":"6ba1b815-d381-4999-9d4d-9b9b595f6d06","Type":"ContainerStarted","Data":"eaa722e65b2303d6bd124c5061acb7abb8f0d436a622aba8ae521eb43d013e3e"} Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.304710 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.306376 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-vvxrw" Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.362710 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96"] Dec 03 20:52:53 crc kubenswrapper[4765]: I1203 20:52:53.672834 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-547c884594-d98p4"] Dec 03 20:52:53 crc kubenswrapper[4765]: W1203 20:52:53.678521 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b04cd5_57c6_4494_a08b_f425c37bf13a.slice/crio-c0220bc52e68a13290dc432058662bd1744606544efade0980165941e071e653 
WatchSource:0}: Error finding container c0220bc52e68a13290dc432058662bd1744606544efade0980165941e071e653: Status 404 returned error can't find the container with id c0220bc52e68a13290dc432058662bd1744606544efade0980165941e071e653 Dec 03 20:52:54 crc kubenswrapper[4765]: I1203 20:52:54.311220 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" event={"ID":"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266","Type":"ContainerStarted","Data":"6a5daf7e59425a69e363a34885096e6560fefc25831b1a00db3d5d6faf1a3e70"} Dec 03 20:52:54 crc kubenswrapper[4765]: I1203 20:52:54.313350 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" event={"ID":"19b04cd5-57c6-4494-a08b-f425c37bf13a","Type":"ContainerStarted","Data":"dc237b73389203b1fb5e148c9fb130b9cd0dc4abd4a82102ade62ea9818ca437"} Dec 03 20:52:54 crc kubenswrapper[4765]: I1203 20:52:54.313399 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" event={"ID":"19b04cd5-57c6-4494-a08b-f425c37bf13a","Type":"ContainerStarted","Data":"c0220bc52e68a13290dc432058662bd1744606544efade0980165941e071e653"} Dec 03 20:52:54 crc kubenswrapper[4765]: I1203 20:52:54.340608 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" podStartSLOduration=34.340590492 podStartE2EDuration="34.340590492s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:52:54.339498762 +0000 UTC m=+872.270043913" watchObservedRunningTime="2025-12-03 20:52:54.340590492 +0000 UTC m=+872.271135643" Dec 03 20:52:55 crc kubenswrapper[4765]: I1203 20:52:55.318326 4765 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.327591 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" event={"ID":"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266","Type":"ContainerStarted","Data":"e59857571aec3db5db1a4b0f7bab0192f36a820657d81663e61d7a7109c8314f"} Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.327808 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.327820 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" event={"ID":"5e62f5de-bd17-4c8d-bc3f-0ce237d6e266","Type":"ContainerStarted","Data":"d0fb89256dc2d857825db8978f9707440841b09a75060b1a3ccd2730b875b9ef"} Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.331372 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" event={"ID":"6ba1b815-d381-4999-9d4d-9b9b595f6d06","Type":"ContainerStarted","Data":"984006a344e0d7b7b507a4dda4242158fb18b519067cb66ea8a4ca9abcdc3066"} Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.331445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" event={"ID":"6ba1b815-d381-4999-9d4d-9b9b595f6d06","Type":"ContainerStarted","Data":"1c58adbaffd323ca1fcfc2532db0f95f44ac0d71f59d40db4e84aa7a5b92dc6d"} Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.331630 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:52:56 crc 
kubenswrapper[4765]: I1203 20:52:56.334510 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" event={"ID":"016c4fd7-25b8-42b0-ba5d-1008cd28b8b3","Type":"ContainerStarted","Data":"4f20de2580c71794322c976d2909a3259668edf8e9dabdd903d1b447138973e0"} Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.366947 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" podStartSLOduration=34.503994085 podStartE2EDuration="36.36690306s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:53.359460005 +0000 UTC m=+871.290005156" lastFinishedPulling="2025-12-03 20:52:55.22236898 +0000 UTC m=+873.152914131" observedRunningTime="2025-12-03 20:52:56.362028919 +0000 UTC m=+874.292574070" watchObservedRunningTime="2025-12-03 20:52:56.36690306 +0000 UTC m=+874.297448251" Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.394539 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" podStartSLOduration=34.195718394 podStartE2EDuration="36.394519309s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:53.011138592 +0000 UTC m=+870.941683743" lastFinishedPulling="2025-12-03 20:52:55.209939507 +0000 UTC m=+873.140484658" observedRunningTime="2025-12-03 20:52:56.38821092 +0000 UTC m=+874.318756081" watchObservedRunningTime="2025-12-03 20:52:56.394519309 +0000 UTC m=+874.325064470" Dec 03 20:52:56 crc kubenswrapper[4765]: I1203 20:52:56.418529 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-f5s59" podStartSLOduration=26.009623396 podStartE2EDuration="36.418510631s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" 
firstStartedPulling="2025-12-03 20:52:22.215541957 +0000 UTC m=+840.146087108" lastFinishedPulling="2025-12-03 20:52:32.624429192 +0000 UTC m=+850.554974343" observedRunningTime="2025-12-03 20:52:56.414396461 +0000 UTC m=+874.344941602" watchObservedRunningTime="2025-12-03 20:52:56.418510631 +0000 UTC m=+874.349055802" Dec 03 20:52:57 crc kubenswrapper[4765]: I1203 20:52:57.342505 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" event={"ID":"e7dd69d2-65b2-4677-b6ac-e90fd4c695c1","Type":"ContainerStarted","Data":"89b8140807dcd6bc99e3eec7d63e158791b87a0f7aac6f50fdf5da7a652de623"} Dec 03 20:52:57 crc kubenswrapper[4765]: I1203 20:52:57.371499 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-mvdp4" podStartSLOduration=26.302597552 podStartE2EDuration="37.371481645s" podCreationTimestamp="2025-12-03 20:52:20 +0000 UTC" firstStartedPulling="2025-12-03 20:52:21.570472888 +0000 UTC m=+839.501018039" lastFinishedPulling="2025-12-03 20:52:32.639356941 +0000 UTC m=+850.569902132" observedRunningTime="2025-12-03 20:52:57.365910986 +0000 UTC m=+875.296456137" watchObservedRunningTime="2025-12-03 20:52:57.371481645 +0000 UTC m=+875.302026796" Dec 03 20:53:00 crc kubenswrapper[4765]: I1203 20:53:00.967558 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-x2qpv" Dec 03 20:53:01 crc kubenswrapper[4765]: I1203 20:53:01.046038 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-n9556" Dec 03 20:53:01 crc kubenswrapper[4765]: I1203 20:53:01.085775 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-dthq2" Dec 03 20:53:01 crc 
kubenswrapper[4765]: I1203 20:53:01.109848 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wmrgj" Dec 03 20:53:01 crc kubenswrapper[4765]: I1203 20:53:01.162514 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-w955s" Dec 03 20:53:01 crc kubenswrapper[4765]: I1203 20:53:01.276705 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h7pk2" Dec 03 20:53:02 crc kubenswrapper[4765]: I1203 20:53:02.465388 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7fw8v" Dec 03 20:53:02 crc kubenswrapper[4765]: I1203 20:53:02.828395 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96" Dec 03 20:53:03 crc kubenswrapper[4765]: I1203 20:53:03.149993 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-547c884594-d98p4" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.721101 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r7jqg"] Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.724509 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.728441 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.728595 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.729035 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.729150 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wcnwm" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.745274 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r7jqg"] Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.775155 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9trzj"] Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.780077 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.781924 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.794408 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9trzj"] Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.871060 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-config\") pod \"dnsmasq-dns-675f4bcbfc-r7jqg\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.871156 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmlt\" (UniqueName: \"kubernetes.io/projected/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-kube-api-access-7fmlt\") pod \"dnsmasq-dns-675f4bcbfc-r7jqg\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.972683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmdl\" (UniqueName: \"kubernetes.io/projected/07d11db5-65c4-4afc-92bf-b402e557991c-kube-api-access-bjmdl\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.972734 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fmlt\" (UniqueName: \"kubernetes.io/projected/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-kube-api-access-7fmlt\") pod \"dnsmasq-dns-675f4bcbfc-r7jqg\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.972904 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.972990 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-config\") pod \"dnsmasq-dns-675f4bcbfc-r7jqg\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.973072 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-config\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.973868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-config\") pod \"dnsmasq-dns-675f4bcbfc-r7jqg\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:19 crc kubenswrapper[4765]: I1203 20:53:19.991605 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fmlt\" (UniqueName: \"kubernetes.io/projected/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-kube-api-access-7fmlt\") pod \"dnsmasq-dns-675f4bcbfc-r7jqg\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:20 crc 
kubenswrapper[4765]: I1203 20:53:20.045184 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.074420 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.074537 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-config\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.074598 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjmdl\" (UniqueName: \"kubernetes.io/projected/07d11db5-65c4-4afc-92bf-b402e557991c-kube-api-access-bjmdl\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.075449 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.075621 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-config\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.096153 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjmdl\" (UniqueName: \"kubernetes.io/projected/07d11db5-65c4-4afc-92bf-b402e557991c-kube-api-access-bjmdl\") pod \"dnsmasq-dns-78dd6ddcc-9trzj\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.393804 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.564230 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r7jqg"] Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.575341 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 20:53:20 crc kubenswrapper[4765]: I1203 20:53:20.856721 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9trzj"] Dec 03 20:53:20 crc kubenswrapper[4765]: W1203 20:53:20.864135 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07d11db5_65c4_4afc_92bf_b402e557991c.slice/crio-0ac89e8239c5ac82f0d41b304f9eb63c9bbca14801a7522fa8aee598491ba029 WatchSource:0}: Error finding container 0ac89e8239c5ac82f0d41b304f9eb63c9bbca14801a7522fa8aee598491ba029: Status 404 returned error can't find the container with id 0ac89e8239c5ac82f0d41b304f9eb63c9bbca14801a7522fa8aee598491ba029 Dec 03 20:53:21 crc kubenswrapper[4765]: I1203 20:53:21.533841 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" event={"ID":"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a","Type":"ContainerStarted","Data":"6286ec113f7c5db48f1673dead88d22a2fd374b8348d6d4623370877b54fbeb3"} Dec 03 20:53:21 crc 
kubenswrapper[4765]: I1203 20:53:21.536263 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" event={"ID":"07d11db5-65c4-4afc-92bf-b402e557991c","Type":"ContainerStarted","Data":"0ac89e8239c5ac82f0d41b304f9eb63c9bbca14801a7522fa8aee598491ba029"} Dec 03 20:53:22 crc kubenswrapper[4765]: I1203 20:53:22.908020 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r7jqg"] Dec 03 20:53:22 crc kubenswrapper[4765]: I1203 20:53:22.924177 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cxdww"] Dec 03 20:53:22 crc kubenswrapper[4765]: I1203 20:53:22.926176 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:22 crc kubenswrapper[4765]: I1203 20:53:22.938396 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cxdww"] Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.028402 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-dns-svc\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.028495 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-config\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.028525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpzc\" (UniqueName: 
\"kubernetes.io/projected/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-kube-api-access-krpzc\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.129746 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-dns-svc\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.129880 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-config\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.129919 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpzc\" (UniqueName: \"kubernetes.io/projected/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-kube-api-access-krpzc\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.130967 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-config\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.131136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-dns-svc\") pod 
\"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.159221 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpzc\" (UniqueName: \"kubernetes.io/projected/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-kube-api-access-krpzc\") pod \"dnsmasq-dns-666b6646f7-cxdww\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.252677 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.356618 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9trzj"] Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.403280 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8p5tm"] Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.404825 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.409241 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8p5tm"] Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.436170 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dx9\" (UniqueName: \"kubernetes.io/projected/a1d464d0-f12b-4182-83d4-eeccccfb42c8-kube-api-access-k2dx9\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.436216 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-config\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.436278 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.537883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dx9\" (UniqueName: \"kubernetes.io/projected/a1d464d0-f12b-4182-83d4-eeccccfb42c8-kube-api-access-k2dx9\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.537928 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-config\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.537982 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.539106 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.539517 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-config\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.578450 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dx9\" (UniqueName: \"kubernetes.io/projected/a1d464d0-f12b-4182-83d4-eeccccfb42c8-kube-api-access-k2dx9\") pod \"dnsmasq-dns-57d769cc4f-8p5tm\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.738878 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:53:23 crc kubenswrapper[4765]: I1203 20:53:23.921976 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cxdww"] Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.240021 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8p5tm"] Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.583133 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.584570 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.590187 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.591743 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.595744 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.595794 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.595891 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.595760 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596082 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596144 4765 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.595765 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596280 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596346 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596394 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bz8v8" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596427 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596440 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596547 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w2d8x" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.596709 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.606561 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" event={"ID":"ba11afbd-0cb2-4489-b31f-e4092d7a8e14","Type":"ContainerStarted","Data":"c17096e19c25946fad06bc8efc358399f56817808e406cf3891389b033d1364a"} Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.607570 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" 
event={"ID":"a1d464d0-f12b-4182-83d4-eeccccfb42c8","Type":"ContainerStarted","Data":"2b230c5f68671783cab04a0926c4b963fcd3f36c3046fe91cb3f0ef3996db1c2"} Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.624841 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.632516 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.758444 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-config-data\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.758788 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759006 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fa4225-5981-4b62-ac67-674896fbc047-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759099 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4sn\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-kube-api-access-kv4sn\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " 
pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759194 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759316 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759419 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fed9c9a-215a-4bd8-9381-6c20099e434d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759503 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fa4225-5981-4b62-ac67-674896fbc047-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759593 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 
20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759713 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759802 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.759912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760019 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760232 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760273 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760311 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760434 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760497 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fed9c9a-215a-4bd8-9381-6c20099e434d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760590 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhwp\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-kube-api-access-cbhwp\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760693 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760769 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.760821 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862583 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862644 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862679 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862708 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862735 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fed9c9a-215a-4bd8-9381-6c20099e434d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862762 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhwp\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-kube-api-access-cbhwp\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862790 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862821 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-config-data\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862908 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862930 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fa4225-5981-4b62-ac67-674896fbc047-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862954 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv4sn\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-kube-api-access-kv4sn\") pod \"rabbitmq-server-0\" (UID: 
\"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.862977 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863002 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863028 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fed9c9a-215a-4bd8-9381-6c20099e434d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863048 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fa4225-5981-4b62-ac67-674896fbc047-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863069 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 
20:53:24.863099 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863125 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863161 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863193 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863333 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863511 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863732 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.863881 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.865073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-server-conf\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.865608 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.866973 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.868908 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fa4225-5981-4b62-ac67-674896fbc047-pod-info\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.868926 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fa4225-5981-4b62-ac67-674896fbc047-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.868950 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.869558 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.869685 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-config-data\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.869900 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fed9c9a-215a-4bd8-9381-6c20099e434d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.869970 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.871375 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.872425 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fed9c9a-215a-4bd8-9381-6c20099e434d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.873245 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.873641 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.874969 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.891993 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv4sn\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-kube-api-access-kv4sn\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.892801 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhwp\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-kube-api-access-cbhwp\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.897156 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.903323 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.929160 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 20:53:24 crc kubenswrapper[4765]: I1203 20:53:24.947310 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.417848 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:53:25 crc kubenswrapper[4765]: W1203 20:53:25.426006 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fed9c9a_215a_4bd8_9381_6c20099e434d.slice/crio-47da83352684c164934c64ec2de1aa3bc946a62d6945c358c1152591fc2d53fe WatchSource:0}: Error finding container 47da83352684c164934c64ec2de1aa3bc946a62d6945c358c1152591fc2d53fe: Status 404 returned error can't find the container with id 47da83352684c164934c64ec2de1aa3bc946a62d6945c358c1152591fc2d53fe Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.461373 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 20:53:25 crc kubenswrapper[4765]: W1203 20:53:25.466990 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fa4225_5981_4b62_ac67_674896fbc047.slice/crio-b4535d55aad41d8b619321ee742477798b0064d82a1ddeb1fb0ef78119875659 WatchSource:0}: Error finding container b4535d55aad41d8b619321ee742477798b0064d82a1ddeb1fb0ef78119875659: Status 
404 returned error can't find the container with id b4535d55aad41d8b619321ee742477798b0064d82a1ddeb1fb0ef78119875659 Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.617930 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fed9c9a-215a-4bd8-9381-6c20099e434d","Type":"ContainerStarted","Data":"47da83352684c164934c64ec2de1aa3bc946a62d6945c358c1152591fc2d53fe"} Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.619960 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fa4225-5981-4b62-ac67-674896fbc047","Type":"ContainerStarted","Data":"b4535d55aad41d8b619321ee742477798b0064d82a1ddeb1fb0ef78119875659"} Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.704770 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.706571 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.711186 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-c6vvf" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.713146 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.713379 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.713447 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.719412 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.720527 4765 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-config-data-default\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877707 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877737 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877774 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsr7t\" (UniqueName: \"kubernetes.io/projected/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-kube-api-access-fsr7t\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877808 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " 
pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877823 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877843 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-kolla-config\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.877859 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979008 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsr7t\" (UniqueName: \"kubernetes.io/projected/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-kube-api-access-fsr7t\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979072 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc 
kubenswrapper[4765]: I1203 20:53:25.979095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979139 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-kolla-config\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979166 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979249 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-config-data-default\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979321 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979351 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.979722 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.980088 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.980467 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-kolla-config\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.981030 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-config-data-default\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.981474 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.988797 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:25 crc kubenswrapper[4765]: I1203 20:53:25.990761 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:26 crc kubenswrapper[4765]: I1203 20:53:26.005065 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsr7t\" (UniqueName: \"kubernetes.io/projected/1d3f1a32-afd2-49fc-b9cd-b49f14770ab2-kube-api-access-fsr7t\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:26 crc kubenswrapper[4765]: I1203 20:53:26.009210 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2\") " pod="openstack/openstack-galera-0" Dec 03 20:53:26 crc kubenswrapper[4765]: I1203 20:53:26.034020 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 20:53:26 crc kubenswrapper[4765]: I1203 20:53:26.581980 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 20:53:26 crc kubenswrapper[4765]: I1203 20:53:26.640250 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2","Type":"ContainerStarted","Data":"6deed22bd9568ba81b0abe5ed67a268f4d510bcb12bb5a49306c3832ef3da4eb"} Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.047992 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.051043 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.054555 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-28mgp" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.054769 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.055067 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.056488 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.061600 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162020 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162272 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162425 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzd7\" (UniqueName: \"kubernetes.io/projected/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-kube-api-access-sgzd7\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162526 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162588 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162797 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.162900 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.268401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.269233 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.269640 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.269781 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.269850 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzd7\" (UniqueName: \"kubernetes.io/projected/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-kube-api-access-sgzd7\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.269892 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.269941 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.270073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.271440 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.271981 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.281438 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.281872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.286625 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" 
Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.321314 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.322759 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.330423 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.348581 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzd7\" (UniqueName: \"kubernetes.io/projected/71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f-kube-api-access-sgzd7\") pod \"openstack-cell1-galera-0\" (UID: \"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f\") " pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.381283 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.392615 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.393586 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.402220 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.416957 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.417354 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.417933 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rtcz8" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.577089 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa82a93-b10c-4414-be93-7d003c7917e9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.577169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffa82a93-b10c-4414-be93-7d003c7917e9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.577219 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fm6v\" (UniqueName: \"kubernetes.io/projected/ffa82a93-b10c-4414-be93-7d003c7917e9-kube-api-access-6fm6v\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.577248 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffa82a93-b10c-4414-be93-7d003c7917e9-kolla-config\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.577286 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffa82a93-b10c-4414-be93-7d003c7917e9-config-data\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.683231 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa82a93-b10c-4414-be93-7d003c7917e9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.683314 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffa82a93-b10c-4414-be93-7d003c7917e9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.683355 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fm6v\" (UniqueName: \"kubernetes.io/projected/ffa82a93-b10c-4414-be93-7d003c7917e9-kube-api-access-6fm6v\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.683379 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffa82a93-b10c-4414-be93-7d003c7917e9-kolla-config\") pod 
\"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.683414 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffa82a93-b10c-4414-be93-7d003c7917e9-config-data\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.684055 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffa82a93-b10c-4414-be93-7d003c7917e9-config-data\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.684496 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffa82a93-b10c-4414-be93-7d003c7917e9-kolla-config\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.689031 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffa82a93-b10c-4414-be93-7d003c7917e9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.690202 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa82a93-b10c-4414-be93-7d003c7917e9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.701759 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fm6v\" 
(UniqueName: \"kubernetes.io/projected/ffa82a93-b10c-4414-be93-7d003c7917e9-kube-api-access-6fm6v\") pod \"memcached-0\" (UID: \"ffa82a93-b10c-4414-be93-7d003c7917e9\") " pod="openstack/memcached-0" Dec 03 20:53:27 crc kubenswrapper[4765]: I1203 20:53:27.765173 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.542699 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.543899 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.549284 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-t4m4t" Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.552237 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.620758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4r8j\" (UniqueName: \"kubernetes.io/projected/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7-kube-api-access-l4r8j\") pod \"kube-state-metrics-0\" (UID: \"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7\") " pod="openstack/kube-state-metrics-0" Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.722378 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4r8j\" (UniqueName: \"kubernetes.io/projected/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7-kube-api-access-l4r8j\") pod \"kube-state-metrics-0\" (UID: \"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7\") " pod="openstack/kube-state-metrics-0" Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.741036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4r8j\" (UniqueName: 
\"kubernetes.io/projected/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7-kube-api-access-l4r8j\") pod \"kube-state-metrics-0\" (UID: \"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7\") " pod="openstack/kube-state-metrics-0" Dec 03 20:53:29 crc kubenswrapper[4765]: I1203 20:53:29.873631 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.260857 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.263099 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.265622 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.266890 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.267681 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-z95nl" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.267763 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.267953 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.268072 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387353 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387413 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387457 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387535 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387561 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9txmk\" (UniqueName: \"kubernetes.io/projected/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-kube-api-access-9txmk\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " 
pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387583 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.387604 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-config\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489277 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489366 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489512 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489558 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9txmk\" (UniqueName: \"kubernetes.io/projected/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-kube-api-access-9txmk\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489590 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489646 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-config\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489701 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489725 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.489735 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.490650 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-config\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.490787 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.490912 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.496036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.496236 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc 
kubenswrapper[4765]: I1203 20:53:33.500369 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.507208 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9txmk\" (UniqueName: \"kubernetes.io/projected/ba74cb76-f80f-4396-9ddb-1eeec6c21fd6-kube-api-access-9txmk\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.510071 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.587196 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.925734 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f85pk"] Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.926726 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f85pk" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.930712 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-96mk4" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.931482 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.932098 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.940968 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f85pk"] Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.960838 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wbnps"] Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.962689 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:33 crc kubenswrapper[4765]: I1203 20:53:33.978354 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wbnps"] Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098346 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmpb2\" (UniqueName: \"kubernetes.io/projected/f08ba0a5-f646-4b38-a53e-687a78bc572e-kube-api-access-dmpb2\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098397 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-run\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098429 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-log\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098478 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9aeba1-759a-41ad-a871-5cfa33de5aae-ovn-controller-tls-certs\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098518 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-run\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098546 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9aeba1-759a-41ad-a871-5cfa33de5aae-combined-ca-bundle\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098577 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9aeba1-759a-41ad-a871-5cfa33de5aae-scripts\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098622 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-etc-ovs\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098655 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f08ba0a5-f646-4b38-a53e-687a78bc572e-scripts\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098686 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-log-ovn\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098713 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-lib\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098741 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99l6\" (UniqueName: \"kubernetes.io/projected/2a9aeba1-759a-41ad-a871-5cfa33de5aae-kube-api-access-h99l6\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.098775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-run-ovn\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200371 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-log-ovn\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200426 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-lib\") pod 
\"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200457 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99l6\" (UniqueName: \"kubernetes.io/projected/2a9aeba1-759a-41ad-a871-5cfa33de5aae-kube-api-access-h99l6\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200490 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-run-ovn\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200526 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmpb2\" (UniqueName: \"kubernetes.io/projected/f08ba0a5-f646-4b38-a53e-687a78bc572e-kube-api-access-dmpb2\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200545 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-run\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.200566 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-log\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 
03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201045 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-run\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201095 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-lib\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201103 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9aeba1-759a-41ad-a871-5cfa33de5aae-ovn-controller-tls-certs\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201160 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-run\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201193 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9aeba1-759a-41ad-a871-5cfa33de5aae-combined-ca-bundle\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201229 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/2a9aeba1-759a-41ad-a871-5cfa33de5aae-scripts\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201277 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-run\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201466 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-var-log\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201603 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-run-ovn\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201624 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a9aeba1-759a-41ad-a871-5cfa33de5aae-var-log-ovn\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201752 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-etc-ovs\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " 
pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201811 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f08ba0a5-f646-4b38-a53e-687a78bc572e-scripts\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.201985 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f08ba0a5-f646-4b38-a53e-687a78bc572e-etc-ovs\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.205280 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a9aeba1-759a-41ad-a871-5cfa33de5aae-scripts\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.205582 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9aeba1-759a-41ad-a871-5cfa33de5aae-combined-ca-bundle\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.209386 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f08ba0a5-f646-4b38-a53e-687a78bc572e-scripts\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.216940 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9aeba1-759a-41ad-a871-5cfa33de5aae-ovn-controller-tls-certs\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.217429 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmpb2\" (UniqueName: \"kubernetes.io/projected/f08ba0a5-f646-4b38-a53e-687a78bc572e-kube-api-access-dmpb2\") pod \"ovn-controller-ovs-wbnps\" (UID: \"f08ba0a5-f646-4b38-a53e-687a78bc572e\") " pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.219913 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99l6\" (UniqueName: \"kubernetes.io/projected/2a9aeba1-759a-41ad-a871-5cfa33de5aae-kube-api-access-h99l6\") pod \"ovn-controller-f85pk\" (UID: \"2a9aeba1-759a-41ad-a871-5cfa33de5aae\") " pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.272067 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk" Dec 03 20:53:34 crc kubenswrapper[4765]: I1203 20:53:34.284606 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.084566 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.086513 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.089685 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.091024 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.091380 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.091633 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-s9mnz" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.093196 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.230211 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmkt8\" (UniqueName: \"kubernetes.io/projected/ae88784b-a398-447a-aaba-b2c2e1c7dc48-kube-api-access-jmkt8\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.230909 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae88784b-a398-447a-aaba-b2c2e1c7dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.231050 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.231078 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae88784b-a398-447a-aaba-b2c2e1c7dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.231151 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.231179 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.231415 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae88784b-a398-447a-aaba-b2c2e1c7dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.231574 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc 
kubenswrapper[4765]: I1203 20:53:36.333640 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333708 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmkt8\" (UniqueName: \"kubernetes.io/projected/ae88784b-a398-447a-aaba-b2c2e1c7dc48-kube-api-access-jmkt8\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333752 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae88784b-a398-447a-aaba-b2c2e1c7dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333789 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333807 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae88784b-a398-447a-aaba-b2c2e1c7dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333823 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333839 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.333867 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae88784b-a398-447a-aaba-b2c2e1c7dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.334855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae88784b-a398-447a-aaba-b2c2e1c7dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.334965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae88784b-a398-447a-aaba-b2c2e1c7dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.335258 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Dec 03 
20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.335622 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae88784b-a398-447a-aaba-b2c2e1c7dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.340166 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.350013 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.350672 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmkt8\" (UniqueName: \"kubernetes.io/projected/ae88784b-a398-447a-aaba-b2c2e1c7dc48-kube-api-access-jmkt8\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.359516 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae88784b-a398-447a-aaba-b2c2e1c7dc48-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.364350 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ae88784b-a398-447a-aaba-b2c2e1c7dc48\") " pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:36 crc kubenswrapper[4765]: I1203 20:53:36.408817 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.743098 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v5fw7"] Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.745333 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.750733 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5fw7"] Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.853502 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-utilities\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.853897 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-catalog-content\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.854057 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2n7\" (UniqueName: \"kubernetes.io/projected/67520fce-47d6-408f-9571-04527e50ad22-kube-api-access-sg2n7\") pod \"redhat-operators-v5fw7\" (UID: 
\"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.955582 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2n7\" (UniqueName: \"kubernetes.io/projected/67520fce-47d6-408f-9571-04527e50ad22-kube-api-access-sg2n7\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.955672 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-utilities\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.955724 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-catalog-content\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.956298 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-catalog-content\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.956427 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-utilities\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " 
pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:37 crc kubenswrapper[4765]: I1203 20:53:37.974772 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2n7\" (UniqueName: \"kubernetes.io/projected/67520fce-47d6-408f-9571-04527e50ad22-kube-api-access-sg2n7\") pod \"redhat-operators-v5fw7\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:38 crc kubenswrapper[4765]: I1203 20:53:38.073284 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:53:42 crc kubenswrapper[4765]: I1203 20:53:42.180779 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:53:42 crc kubenswrapper[4765]: I1203 20:53:42.261452 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.323542 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.324398 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2dx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8p5tm_openstack(a1d464d0-f12b-4182-83d4-eeccccfb42c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.325965 4765 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.338801 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.338915 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fmlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-r7jqg_openstack(015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.340071 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" podUID="015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.340509 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.340744 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krpzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-cxdww_openstack(ba11afbd-0cb2-4489-b31f-e4092d7a8e14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.342282 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.350876 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.351109 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjmdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9trzj_openstack(07d11db5-65c4-4afc-92bf-b402e557991c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.352474 4765 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" podUID="07d11db5-65c4-4afc-92bf-b402e557991c" Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.783988 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f85pk"] Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.798734 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 20:53:45 crc kubenswrapper[4765]: W1203 20:53:45.799257 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71fd51b3_7a6c_4d2a_a39a_93ebcd06da7f.slice/crio-8374b578061bd504002c79b85f2b2d52a9febbc99d4d53d2479710778e687696 WatchSource:0}: Error finding container 8374b578061bd504002c79b85f2b2d52a9febbc99d4d53d2479710778e687696: Status 404 returned error can't find the container with id 8374b578061bd504002c79b85f2b2d52a9febbc99d4d53d2479710778e687696 Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.857679 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7","Type":"ContainerStarted","Data":"5606a55f4755aa3fc90660c00fd67d8e244fdea4f2818ffdd2df82e688e8f6b7"} Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.859290 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f","Type":"ContainerStarted","Data":"8374b578061bd504002c79b85f2b2d52a9febbc99d4d53d2479710778e687696"} Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.861767 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"ffa82a93-b10c-4414-be93-7d003c7917e9","Type":"ContainerStarted","Data":"fa397ca77c4cdff85931592ff0fd9533afbb9220809b5a820a8ee02d413eee16"} Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.863909 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2","Type":"ContainerStarted","Data":"5cc318c2be95fa20bef42b9513b1851f63a91d99ef1a3ed985c5eb104c5b043a"} Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.865528 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk" event={"ID":"2a9aeba1-759a-41ad-a871-5cfa33de5aae","Type":"ContainerStarted","Data":"66746d559a6031c2f790ba847c7954a1c05874a32d4a4b711943782fa9e1c36a"} Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.868992 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" Dec 03 20:53:45 crc kubenswrapper[4765]: E1203 20:53:45.869623 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" Dec 03 20:53:45 crc kubenswrapper[4765]: I1203 20:53:45.974579 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.074412 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5fw7"] Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.144910 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.765653 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.794786 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-config\") pod \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.795007 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fmlt\" (UniqueName: \"kubernetes.io/projected/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-kube-api-access-7fmlt\") pod \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\" (UID: \"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a\") " Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.795407 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-config" (OuterVolumeSpecName: "config") pod "015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a" (UID: "015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.797167 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.804866 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-kube-api-access-7fmlt" (OuterVolumeSpecName: "kube-api-access-7fmlt") pod "015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a" (UID: "015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a"). 
InnerVolumeSpecName "kube-api-access-7fmlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.875383 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fa4225-5981-4b62-ac67-674896fbc047","Type":"ContainerStarted","Data":"0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.879347 4765 generic.go:334] "Generic (PLEG): container finished" podID="67520fce-47d6-408f-9571-04527e50ad22" containerID="dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6" exitCode=0 Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.879436 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5fw7" event={"ID":"67520fce-47d6-408f-9571-04527e50ad22","Type":"ContainerDied","Data":"dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.879472 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5fw7" event={"ID":"67520fce-47d6-408f-9571-04527e50ad22","Type":"ContainerStarted","Data":"09031af52bedb2a41ee2f91c26541ee45a64b9a2e12b798a2cdbf535da2c4837"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.891985 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6","Type":"ContainerStarted","Data":"b79c575f751bca16f96e0967d62db62832c3e083de7a31e187de5cd33d7093d5"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.893067 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ae88784b-a398-447a-aaba-b2c2e1c7dc48","Type":"ContainerStarted","Data":"a02c7207460e1c7324a2f15c48c60c85442159e7ebe9eec3c8871a005be30334"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.895292 4765 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fed9c9a-215a-4bd8-9381-6c20099e434d","Type":"ContainerStarted","Data":"61c07efe7283a35939fa86ec93228bad0cd86cad45e92d2b77fb644dd2ded0af"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.905707 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fmlt\" (UniqueName: \"kubernetes.io/projected/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a-kube-api-access-7fmlt\") on node \"crc\" DevicePath \"\"" Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.908821 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f","Type":"ContainerStarted","Data":"3152af336c1b55e02c502eff064ec507f80b9e0aedd223174aa9cc0a54881e35"} Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.911724 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" Dec 03 20:53:46 crc kubenswrapper[4765]: I1203 20:53:46.912762 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r7jqg" event={"ID":"015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a","Type":"ContainerDied","Data":"6286ec113f7c5db48f1673dead88d22a2fd374b8348d6d4623370877b54fbeb3"} Dec 03 20:53:47 crc kubenswrapper[4765]: I1203 20:53:47.027011 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r7jqg"] Dec 03 20:53:47 crc kubenswrapper[4765]: I1203 20:53:47.036397 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r7jqg"] Dec 03 20:53:47 crc kubenswrapper[4765]: I1203 20:53:47.140437 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wbnps"] Dec 03 20:53:48 crc kubenswrapper[4765]: I1203 20:53:48.369681 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a" 
path="/var/lib/kubelet/pods/015956be-4d21-4ea0-9e9e-7e5ddeaf4d4a/volumes" Dec 03 20:53:54 crc kubenswrapper[4765]: W1203 20:53:54.841924 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf08ba0a5_f646_4b38_a53e_687a78bc572e.slice/crio-0d90c91d15593c410d7c56fd524dece901f0a3992b93370aa8b6d425cca5ff50 WatchSource:0}: Error finding container 0d90c91d15593c410d7c56fd524dece901f0a3992b93370aa8b6d425cca5ff50: Status 404 returned error can't find the container with id 0d90c91d15593c410d7c56fd524dece901f0a3992b93370aa8b6d425cca5ff50 Dec 03 20:53:54 crc kubenswrapper[4765]: I1203 20:53:54.878098 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:54 crc kubenswrapper[4765]: I1203 20:53:54.993870 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wbnps" event={"ID":"f08ba0a5-f646-4b38-a53e-687a78bc572e","Type":"ContainerStarted","Data":"0d90c91d15593c410d7c56fd524dece901f0a3992b93370aa8b6d425cca5ff50"} Dec 03 20:53:54 crc kubenswrapper[4765]: I1203 20:53:54.997111 4765 generic.go:334] "Generic (PLEG): container finished" podID="1d3f1a32-afd2-49fc-b9cd-b49f14770ab2" containerID="5cc318c2be95fa20bef42b9513b1851f63a91d99ef1a3ed985c5eb104c5b043a" exitCode=0 Dec 03 20:53:54 crc kubenswrapper[4765]: I1203 20:53:54.997233 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2","Type":"ContainerDied","Data":"5cc318c2be95fa20bef42b9513b1851f63a91d99ef1a3ed985c5eb104c5b043a"} Dec 03 20:53:54 crc kubenswrapper[4765]: I1203 20:53:54.999193 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" event={"ID":"07d11db5-65c4-4afc-92bf-b402e557991c","Type":"ContainerDied","Data":"0ac89e8239c5ac82f0d41b304f9eb63c9bbca14801a7522fa8aee598491ba029"} Dec 03 20:53:54 
crc kubenswrapper[4765]: I1203 20:53:54.999279 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9trzj" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.062385 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-config\") pod \"07d11db5-65c4-4afc-92bf-b402e557991c\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.062495 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-dns-svc\") pod \"07d11db5-65c4-4afc-92bf-b402e557991c\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.062584 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjmdl\" (UniqueName: \"kubernetes.io/projected/07d11db5-65c4-4afc-92bf-b402e557991c-kube-api-access-bjmdl\") pod \"07d11db5-65c4-4afc-92bf-b402e557991c\" (UID: \"07d11db5-65c4-4afc-92bf-b402e557991c\") " Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.063315 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07d11db5-65c4-4afc-92bf-b402e557991c" (UID: "07d11db5-65c4-4afc-92bf-b402e557991c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.063417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-config" (OuterVolumeSpecName: "config") pod "07d11db5-65c4-4afc-92bf-b402e557991c" (UID: "07d11db5-65c4-4afc-92bf-b402e557991c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.066747 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d11db5-65c4-4afc-92bf-b402e557991c-kube-api-access-bjmdl" (OuterVolumeSpecName: "kube-api-access-bjmdl") pod "07d11db5-65c4-4afc-92bf-b402e557991c" (UID: "07d11db5-65c4-4afc-92bf-b402e557991c"). InnerVolumeSpecName "kube-api-access-bjmdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.164739 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.164768 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07d11db5-65c4-4afc-92bf-b402e557991c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.164887 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjmdl\" (UniqueName: \"kubernetes.io/projected/07d11db5-65c4-4afc-92bf-b402e557991c-kube-api-access-bjmdl\") on node \"crc\" DevicePath \"\"" Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.378506 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9trzj"] Dec 03 20:53:55 crc kubenswrapper[4765]: I1203 20:53:55.396097 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9trzj"] Dec 03 20:53:55 crc kubenswrapper[4765]: E1203 20:53:55.501507 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07d11db5_65c4_4afc_92bf_b402e557991c.slice/crio-0ac89e8239c5ac82f0d41b304f9eb63c9bbca14801a7522fa8aee598491ba029\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07d11db5_65c4_4afc_92bf_b402e557991c.slice\": RecentStats: unable to find data in memory cache]" Dec 03 20:53:56 crc kubenswrapper[4765]: I1203 20:53:56.008672 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1d3f1a32-afd2-49fc-b9cd-b49f14770ab2","Type":"ContainerStarted","Data":"71aafdd5f8e22a2193de9845d1a3eca80b584c2962604d77566f0d76e3a335d3"} Dec 03 20:53:56 crc kubenswrapper[4765]: I1203 20:53:56.030836 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.26710053 podStartE2EDuration="32.030814295s" podCreationTimestamp="2025-12-03 20:53:24 +0000 UTC" firstStartedPulling="2025-12-03 20:53:26.600226625 +0000 UTC m=+904.530771766" lastFinishedPulling="2025-12-03 20:53:45.36394038 +0000 UTC m=+923.294485531" observedRunningTime="2025-12-03 20:53:56.027017493 +0000 UTC m=+933.957562674" watchObservedRunningTime="2025-12-03 20:53:56.030814295 +0000 UTC m=+933.961359456" Dec 03 20:53:56 crc kubenswrapper[4765]: I1203 20:53:56.035056 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 03 20:53:56 crc kubenswrapper[4765]: I1203 20:53:56.035094 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 03 20:53:56 crc kubenswrapper[4765]: I1203 20:53:56.372577 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d11db5-65c4-4afc-92bf-b402e557991c" path="/var/lib/kubelet/pods/07d11db5-65c4-4afc-92bf-b402e557991c/volumes" Dec 03 20:53:57 crc kubenswrapper[4765]: I1203 20:53:57.022243 4765 generic.go:334] 
"Generic (PLEG): container finished" podID="71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f" containerID="3152af336c1b55e02c502eff064ec507f80b9e0aedd223174aa9cc0a54881e35" exitCode=0
Dec 03 20:53:57 crc kubenswrapper[4765]: I1203 20:53:57.022378 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f","Type":"ContainerDied","Data":"3152af336c1b55e02c502eff064ec507f80b9e0aedd223174aa9cc0a54881e35"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.049030 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ae88784b-a398-447a-aaba-b2c2e1c7dc48","Type":"ContainerStarted","Data":"5f5c37ba4b01d78e97bd285a1f35c5b2eba9f91243964cee6bde1a808688e5f2"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.051038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f","Type":"ContainerStarted","Data":"8cac0554af1faee2d2284ff68c8460cc5d55d0c2b916882dc9b3df5f303fe263"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.054056 4765 generic.go:334] "Generic (PLEG): container finished" podID="67520fce-47d6-408f-9571-04527e50ad22" containerID="c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7" exitCode=0
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.054124 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5fw7" event={"ID":"67520fce-47d6-408f-9571-04527e50ad22","Type":"ContainerDied","Data":"c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.056778 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6","Type":"ContainerStarted","Data":"3e4383803ded67b9c73a7da3c22c8312de90e41620691bac237bcd6acdb2bd51"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.059270 4765 generic.go:334] "Generic (PLEG): container finished" podID="f08ba0a5-f646-4b38-a53e-687a78bc572e" containerID="d4f32a8aea4d17a1fbd9120bc0e37a10b9053b3965c5189206d325bddcfe38c0" exitCode=0
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.059362 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wbnps" event={"ID":"f08ba0a5-f646-4b38-a53e-687a78bc572e","Type":"ContainerDied","Data":"d4f32a8aea4d17a1fbd9120bc0e37a10b9053b3965c5189206d325bddcfe38c0"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.062128 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ffa82a93-b10c-4414-be93-7d003c7917e9","Type":"ContainerStarted","Data":"8dbbf1f7c665ff1615bb435bb455da5c2e47d7f32e8af944cba00b374e108356"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.062248 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.063526 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk" event={"ID":"2a9aeba1-759a-41ad-a871-5cfa33de5aae","Type":"ContainerStarted","Data":"224f751f65d14e579f9dd3da5c3661feb616a44fcc8059002b7bdf4b13152638"}
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.063679 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-f85pk"
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.076706 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=33.076690915 podStartE2EDuration="33.076690915s" podCreationTimestamp="2025-12-03 20:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:53:59.071536077 +0000 UTC m=+937.002081278" watchObservedRunningTime="2025-12-03 20:53:59.076690915 +0000 UTC m=+937.007236066"
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.103671 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.335748836 podStartE2EDuration="32.103655911s" podCreationTimestamp="2025-12-03 20:53:27 +0000 UTC" firstStartedPulling="2025-12-03 20:53:45.337097808 +0000 UTC m=+923.267642959" lastFinishedPulling="2025-12-03 20:53:56.105004883 +0000 UTC m=+934.035550034" observedRunningTime="2025-12-03 20:53:59.101291357 +0000 UTC m=+937.031836528" watchObservedRunningTime="2025-12-03 20:53:59.103655911 +0000 UTC m=+937.034201062"
Dec 03 20:53:59 crc kubenswrapper[4765]: I1203 20:53:59.122542 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-f85pk" podStartSLOduration=13.932579226 podStartE2EDuration="26.12252811s" podCreationTimestamp="2025-12-03 20:53:33 +0000 UTC" firstStartedPulling="2025-12-03 20:53:45.787584974 +0000 UTC m=+923.718130125" lastFinishedPulling="2025-12-03 20:53:57.977533858 +0000 UTC m=+935.908079009" observedRunningTime="2025-12-03 20:53:59.117225377 +0000 UTC m=+937.047770528" watchObservedRunningTime="2025-12-03 20:53:59.12252811 +0000 UTC m=+937.053073261"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.073607 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wbnps" event={"ID":"f08ba0a5-f646-4b38-a53e-687a78bc572e","Type":"ContainerStarted","Data":"c2e5fc98d57d458507087610f8f7417117fb2cb1d92fbdeffaf86a2ab586ee4b"}
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.074067 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wbnps" event={"ID":"f08ba0a5-f646-4b38-a53e-687a78bc572e","Type":"ContainerStarted","Data":"1be24a24a069159a9214c7a2f41373ebdc4a0be88895793172d523d88e31fa4d"}
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.074085 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wbnps"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.074096 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wbnps"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.075912 4765 generic.go:334] "Generic (PLEG): container finished" podID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerID="4ac37473f2764ac2ea4b550d56b877072d3cf452ca9dd8effaabbdc24b2d0e4c" exitCode=0
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.075964 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" event={"ID":"ba11afbd-0cb2-4489-b31f-e4092d7a8e14","Type":"ContainerDied","Data":"4ac37473f2764ac2ea4b550d56b877072d3cf452ca9dd8effaabbdc24b2d0e4c"}
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.077536 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7","Type":"ContainerStarted","Data":"0b175116666f690f63043dc4ae2bf28765d421d3174452db555139f9aba60604"}
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.077597 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.082005 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5fw7" event={"ID":"67520fce-47d6-408f-9571-04527e50ad22","Type":"ContainerStarted","Data":"00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838"}
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.100802 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wbnps" podStartSLOduration=23.97012389 podStartE2EDuration="27.100782612s" podCreationTimestamp="2025-12-03 20:53:33 +0000 UTC" firstStartedPulling="2025-12-03 20:53:54.844987555 +0000 UTC m=+932.775532706" lastFinishedPulling="2025-12-03 20:53:57.975646277 +0000 UTC m=+935.906191428" observedRunningTime="2025-12-03 20:54:00.098870651 +0000 UTC m=+938.029415802" watchObservedRunningTime="2025-12-03 20:54:00.100782612 +0000 UTC m=+938.031327773"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.141776 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.158502 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.348971987 podStartE2EDuration="31.158478546s" podCreationTimestamp="2025-12-03 20:53:29 +0000 UTC" firstStartedPulling="2025-12-03 20:53:45.326215395 +0000 UTC m=+923.256760546" lastFinishedPulling="2025-12-03 20:53:59.135721934 +0000 UTC m=+937.066267105" observedRunningTime="2025-12-03 20:54:00.148583399 +0000 UTC m=+938.079128560" watchObservedRunningTime="2025-12-03 20:54:00.158478546 +0000 UTC m=+938.089023717"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.163425 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v5fw7" podStartSLOduration=18.478007615 podStartE2EDuration="23.163406798s" podCreationTimestamp="2025-12-03 20:53:37 +0000 UTC" firstStartedPulling="2025-12-03 20:53:54.782285267 +0000 UTC m=+932.712830428" lastFinishedPulling="2025-12-03 20:53:59.46768446 +0000 UTC m=+937.398229611" observedRunningTime="2025-12-03 20:54:00.134587012 +0000 UTC m=+938.065132163" watchObservedRunningTime="2025-12-03 20:54:00.163406798 +0000 UTC m=+938.093951949"
Dec 03 20:54:00 crc kubenswrapper[4765]: I1203 20:54:00.232732 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 03 20:54:01 crc kubenswrapper[4765]: I1203 20:54:01.094110 4765 generic.go:334] "Generic (PLEG): container finished" podID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerID="e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f" exitCode=0
Dec 03 20:54:01 crc kubenswrapper[4765]: I1203 20:54:01.094212 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" event={"ID":"a1d464d0-f12b-4182-83d4-eeccccfb42c8","Type":"ContainerDied","Data":"e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f"}
Dec 03 20:54:01 crc kubenswrapper[4765]: I1203 20:54:01.097754 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" event={"ID":"ba11afbd-0cb2-4489-b31f-e4092d7a8e14","Type":"ContainerStarted","Data":"e1c00ed04459533597ccf45e65020358709f7a34a87368018b0f83677f1e4201"}
Dec 03 20:54:01 crc kubenswrapper[4765]: I1203 20:54:01.137651 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" podStartSLOduration=3.913355595 podStartE2EDuration="39.137631563s" podCreationTimestamp="2025-12-03 20:53:22 +0000 UTC" firstStartedPulling="2025-12-03 20:53:23.92569718 +0000 UTC m=+901.856242321" lastFinishedPulling="2025-12-03 20:53:59.149973148 +0000 UTC m=+937.080518289" observedRunningTime="2025-12-03 20:54:01.132101224 +0000 UTC m=+939.062646405" watchObservedRunningTime="2025-12-03 20:54:01.137631563 +0000 UTC m=+939.068176714"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.113552 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ba74cb76-f80f-4396-9ddb-1eeec6c21fd6","Type":"ContainerStarted","Data":"a3c34a31e6968749ae489792923806d2f419fb34ac64edba76c6e06f216e7b31"}
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.116084 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" event={"ID":"a1d464d0-f12b-4182-83d4-eeccccfb42c8","Type":"ContainerStarted","Data":"ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77"}
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.116314 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.117648 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ae88784b-a398-447a-aaba-b2c2e1c7dc48","Type":"ContainerStarted","Data":"85cb940b25c6b7074a87edd2858da126df9ba7e1fc4a50bb4ad90e607f5d0f9e"}
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.140713 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.671301123 podStartE2EDuration="31.140684862s" podCreationTimestamp="2025-12-03 20:53:32 +0000 UTC" firstStartedPulling="2025-12-03 20:53:45.938762504 +0000 UTC m=+923.869307655" lastFinishedPulling="2025-12-03 20:54:02.408146243 +0000 UTC m=+940.338691394" observedRunningTime="2025-12-03 20:54:03.13764928 +0000 UTC m=+941.068194431" watchObservedRunningTime="2025-12-03 20:54:03.140684862 +0000 UTC m=+941.071230053"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.162967 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.94946671 podStartE2EDuration="28.162952781s" podCreationTimestamp="2025-12-03 20:53:35 +0000 UTC" firstStartedPulling="2025-12-03 20:53:46.181639661 +0000 UTC m=+924.112184812" lastFinishedPulling="2025-12-03 20:54:02.395125732 +0000 UTC m=+940.325670883" observedRunningTime="2025-12-03 20:54:03.160153586 +0000 UTC m=+941.090698737" watchObservedRunningTime="2025-12-03 20:54:03.162952781 +0000 UTC m=+941.093497932"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.186950 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" podStartSLOduration=-9223371996.667841 podStartE2EDuration="40.186935237s" podCreationTimestamp="2025-12-03 20:53:23 +0000 UTC" firstStartedPulling="2025-12-03 20:53:24.251098038 +0000 UTC m=+902.181643189" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:03.186880095 +0000 UTC m=+941.117425236" watchObservedRunningTime="2025-12-03 20:54:03.186935237 +0000 UTC m=+941.117480388"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.253497 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-cxdww"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.409049 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.456979 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.588660 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.588712 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Dec 03 20:54:03 crc kubenswrapper[4765]: I1203 20:54:03.639401 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.124976 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.162032 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.169505 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.312023 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rqsmp"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.313505 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.379006 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqsmp"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.441439 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-catalog-content\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.441520 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxh9\" (UniqueName: \"kubernetes.io/projected/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-kube-api-access-fcxh9\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.441549 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-utilities\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.454422 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cxdww"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.454611 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerName="dnsmasq-dns" containerID="cri-o://e1c00ed04459533597ccf45e65020358709f7a34a87368018b0f83677f1e4201" gracePeriod=10
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.501341 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kgg7k"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.502484 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.505460 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.520363 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kgg7k"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545320 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxh9\" (UniqueName: \"kubernetes.io/projected/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-kube-api-access-fcxh9\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545377 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-config\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545412 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-utilities\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545445 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545495 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545519 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc65m\" (UniqueName: \"kubernetes.io/projected/eaca0f78-34b4-469b-b973-0ab96adfc5fe-kube-api-access-vc65m\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.545587 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-catalog-content\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.561178 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-catalog-content\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.561643 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-utilities\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.602671 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4b7vx"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.604399 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.610737 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.661742 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-config\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.661990 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662016 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/70312ced-15b1-4366-aa36-c32538b61141-ovs-rundir\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662037 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jv5q\" (UniqueName: \"kubernetes.io/projected/70312ced-15b1-4366-aa36-c32538b61141-kube-api-access-7jv5q\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662063 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662080 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70312ced-15b1-4366-aa36-c32538b61141-config\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662099 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc65m\" (UniqueName: \"kubernetes.io/projected/eaca0f78-34b4-469b-b973-0ab96adfc5fe-kube-api-access-vc65m\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662124 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70312ced-15b1-4366-aa36-c32538b61141-combined-ca-bundle\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662160 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/70312ced-15b1-4366-aa36-c32538b61141-ovn-rundir\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662181 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70312ced-15b1-4366-aa36-c32538b61141-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.662931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-config\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.661797 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxh9\" (UniqueName: \"kubernetes.io/projected/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-kube-api-access-fcxh9\") pod \"redhat-marketplace-rqsmp\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " pod="openshift-marketplace/redhat-marketplace-rqsmp"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.663479 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.664072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.676878 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4b7vx"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.706177 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc65m\" (UniqueName: \"kubernetes.io/projected/eaca0f78-34b4-469b-b973-0ab96adfc5fe-kube-api-access-vc65m\") pod \"dnsmasq-dns-6bc7876d45-kgg7k\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.748408 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8p5tm"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.763962 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/70312ced-15b1-4366-aa36-c32538b61141-ovn-rundir\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.764010 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70312ced-15b1-4366-aa36-c32538b61141-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.764081 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/70312ced-15b1-4366-aa36-c32538b61141-ovs-rundir\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.764109 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jv5q\" (UniqueName: \"kubernetes.io/projected/70312ced-15b1-4366-aa36-c32538b61141-kube-api-access-7jv5q\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.764135 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70312ced-15b1-4366-aa36-c32538b61141-config\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.764161 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70312ced-15b1-4366-aa36-c32538b61141-combined-ca-bundle\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.764802 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.765170 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/70312ced-15b1-4366-aa36-c32538b61141-ovs-rundir\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.765252 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/70312ced-15b1-4366-aa36-c32538b61141-ovn-rundir\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.766046 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70312ced-15b1-4366-aa36-c32538b61141-config\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.766259 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.777205 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70312ced-15b1-4366-aa36-c32538b61141-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.777483 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.777819 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.777869 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pkdfk"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.777970 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.784626 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-tskk6"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.786121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.787185 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70312ced-15b1-4366-aa36-c32538b61141-combined-ca-bundle\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.795681 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.815021 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.815881 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jv5q\" (UniqueName: \"kubernetes.io/projected/70312ced-15b1-4366-aa36-c32538b61141-kube-api-access-7jv5q\") pod \"ovn-controller-metrics-4b7vx\" (UID: \"70312ced-15b1-4366-aa36-c32538b61141\") " pod="openstack/ovn-controller-metrics-4b7vx"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.816216 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.838696 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-tskk6"]
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865148 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865175 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmt7\" (UniqueName: \"kubernetes.io/projected/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-kube-api-access-mxmt7\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865220 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865240 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d5mm\" (UniqueName: \"kubernetes.io/projected/403709eb-a3d4-4e89-ac92-de401056e3d0-kube-api-access-8d5mm\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865385 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865426 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865452 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/403709eb-a3d4-4e89-ac92-de401056e3d0-scripts\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865494 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/403709eb-a3d4-4e89-ac92-de401056e3d0-config\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865549 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/403709eb-a3d4-4e89-ac92-de401056e3d0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865587 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-dns-svc\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.865619 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-config\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.935324 4765 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.968833 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-config\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.968915 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.968978 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969021 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmt7\" (UniqueName: \"kubernetes.io/projected/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-kube-api-access-mxmt7\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969096 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc 
kubenswrapper[4765]: I1203 20:54:04.969122 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d5mm\" (UniqueName: \"kubernetes.io/projected/403709eb-a3d4-4e89-ac92-de401056e3d0-kube-api-access-8d5mm\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969151 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969211 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969254 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/403709eb-a3d4-4e89-ac92-de401056e3d0-scripts\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969288 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/403709eb-a3d4-4e89-ac92-de401056e3d0-config\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969343 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/403709eb-a3d4-4e89-ac92-de401056e3d0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.969375 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-dns-svc\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.970731 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-dns-svc\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.971365 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-config\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.972014 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.973119 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") 
" pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.978036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/403709eb-a3d4-4e89-ac92-de401056e3d0-scripts\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.979915 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4b7vx" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.980239 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/403709eb-a3d4-4e89-ac92-de401056e3d0-config\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.980623 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.982747 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.985046 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403709eb-a3d4-4e89-ac92-de401056e3d0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc 
kubenswrapper[4765]: I1203 20:54:04.988789 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/403709eb-a3d4-4e89-ac92-de401056e3d0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:04 crc kubenswrapper[4765]: I1203 20:54:04.988877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmt7\" (UniqueName: \"kubernetes.io/projected/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-kube-api-access-mxmt7\") pod \"dnsmasq-dns-8554648995-tskk6\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") " pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.001880 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d5mm\" (UniqueName: \"kubernetes.io/projected/403709eb-a3d4-4e89-ac92-de401056e3d0-kube-api-access-8d5mm\") pod \"ovn-northd-0\" (UID: \"403709eb-a3d4-4e89-ac92-de401056e3d0\") " pod="openstack/ovn-northd-0" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.141463 4765 generic.go:334] "Generic (PLEG): container finished" podID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerID="e1c00ed04459533597ccf45e65020358709f7a34a87368018b0f83677f1e4201" exitCode=0 Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.141626 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerName="dnsmasq-dns" containerID="cri-o://ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77" gracePeriod=10 Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.141862 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" 
event={"ID":"ba11afbd-0cb2-4489-b31f-e4092d7a8e14","Type":"ContainerDied","Data":"e1c00ed04459533597ccf45e65020358709f7a34a87368018b0f83677f1e4201"} Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.173817 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.251911 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.330766 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kgg7k"] Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.430196 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.490826 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krpzc\" (UniqueName: \"kubernetes.io/projected/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-kube-api-access-krpzc\") pod \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.490995 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-dns-svc\") pod \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.491037 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-config\") pod \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\" (UID: \"ba11afbd-0cb2-4489-b31f-e4092d7a8e14\") " Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.492432 4765 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqsmp"] Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.495875 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-kube-api-access-krpzc" (OuterVolumeSpecName: "kube-api-access-krpzc") pod "ba11afbd-0cb2-4489-b31f-e4092d7a8e14" (UID: "ba11afbd-0cb2-4489-b31f-e4092d7a8e14"). InnerVolumeSpecName "kube-api-access-krpzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:05 crc kubenswrapper[4765]: W1203 20:54:05.516535 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10d43e32_c91c_4a30_ba5a_de0c3d1b0800.slice/crio-cb456d73e70001f01597fd9b194fedb2baa9fef76277070ecdac77578ed212c1 WatchSource:0}: Error finding container cb456d73e70001f01597fd9b194fedb2baa9fef76277070ecdac77578ed212c1: Status 404 returned error can't find the container with id cb456d73e70001f01597fd9b194fedb2baa9fef76277070ecdac77578ed212c1 Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.564426 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba11afbd-0cb2-4489-b31f-e4092d7a8e14" (UID: "ba11afbd-0cb2-4489-b31f-e4092d7a8e14"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.595093 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krpzc\" (UniqueName: \"kubernetes.io/projected/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-kube-api-access-krpzc\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.595141 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.602933 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-config" (OuterVolumeSpecName: "config") pod "ba11afbd-0cb2-4489-b31f-e4092d7a8e14" (UID: "ba11afbd-0cb2-4489-b31f-e4092d7a8e14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.633812 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4b7vx"] Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.697743 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba11afbd-0cb2-4489-b31f-e4092d7a8e14-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.854504 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 20:54:05 crc kubenswrapper[4765]: I1203 20:54:05.883588 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-tskk6"] Dec 03 20:54:05 crc kubenswrapper[4765]: W1203 20:54:05.916011 4765 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403709eb_a3d4_4e89_ac92_de401056e3d0.slice/crio-10ba4e1bb81ed85672bdcc6ebbaf4b98a5edbd6aa75aab2093736f9983c37c56 WatchSource:0}: Error finding container 10ba4e1bb81ed85672bdcc6ebbaf4b98a5edbd6aa75aab2093736f9983c37c56: Status 404 returned error can't find the container with id 10ba4e1bb81ed85672bdcc6ebbaf4b98a5edbd6aa75aab2093736f9983c37c56 Dec 03 20:54:05 crc kubenswrapper[4765]: W1203 20:54:05.916600 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf893cff2_cfbc_4c12_9781_c2d6a7f3905f.slice/crio-c2ec6e870f73145b5be0ca28d83a7c6af29b08e61ecc408003d4b0f996081990 WatchSource:0}: Error finding container c2ec6e870f73145b5be0ca28d83a7c6af29b08e61ecc408003d4b0f996081990: Status 404 returned error can't find the container with id c2ec6e870f73145b5be0ca28d83a7c6af29b08e61ecc408003d4b0f996081990 Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.028669 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.149990 4765 generic.go:334] "Generic (PLEG): container finished" podID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerID="926a0eb69d8c21a3fb395ea78e4fe1c4dd471f42f26d1eed07be652dd5562a58" exitCode=0 Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.150162 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" event={"ID":"eaca0f78-34b4-469b-b973-0ab96adfc5fe","Type":"ContainerDied","Data":"926a0eb69d8c21a3fb395ea78e4fe1c4dd471f42f26d1eed07be652dd5562a58"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.150947 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" event={"ID":"eaca0f78-34b4-469b-b973-0ab96adfc5fe","Type":"ContainerStarted","Data":"1e2ccd18d199d5232b79f4c1c6c9ef646928da4239277d54c20c08053f70d958"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.156284 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerStarted","Data":"1e798ff07b02f21ef14f996dadc464a3e5b7ee8b826d134816c4ac542a018c17"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.156442 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerStarted","Data":"cb456d73e70001f01597fd9b194fedb2baa9fef76277070ecdac77578ed212c1"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.161939 4765 generic.go:334] "Generic (PLEG): container finished" podID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerID="ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77" exitCode=0 Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.162020 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" event={"ID":"a1d464d0-f12b-4182-83d4-eeccccfb42c8","Type":"ContainerDied","Data":"ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.162053 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" event={"ID":"a1d464d0-f12b-4182-83d4-eeccccfb42c8","Type":"ContainerDied","Data":"2b230c5f68671783cab04a0926c4b963fcd3f36c3046fe91cb3f0ef3996db1c2"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.162072 4765 scope.go:117] "RemoveContainer" containerID="ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.162206 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8p5tm" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.169235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-tskk6" event={"ID":"f893cff2-cfbc-4c12-9781-c2d6a7f3905f","Type":"ContainerStarted","Data":"544dc13376d19749c3b01f3871ab7c9287a388fad05317770a10500c950337a5"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.169444 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-tskk6" event={"ID":"f893cff2-cfbc-4c12-9781-c2d6a7f3905f","Type":"ContainerStarted","Data":"c2ec6e870f73145b5be0ca28d83a7c6af29b08e61ecc408003d4b0f996081990"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.177517 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" event={"ID":"ba11afbd-0cb2-4489-b31f-e4092d7a8e14","Type":"ContainerDied","Data":"c17096e19c25946fad06bc8efc358399f56817808e406cf3891389b033d1364a"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.177610 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cxdww" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.181152 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"403709eb-a3d4-4e89-ac92-de401056e3d0","Type":"ContainerStarted","Data":"10ba4e1bb81ed85672bdcc6ebbaf4b98a5edbd6aa75aab2093736f9983c37c56"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.194061 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4b7vx" event={"ID":"70312ced-15b1-4366-aa36-c32538b61141","Type":"ContainerStarted","Data":"e1cebabcf8a57d23c5bc92690af60b9cc060f67d05d29ddc767089c7a92e0144"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.194173 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4b7vx" event={"ID":"70312ced-15b1-4366-aa36-c32538b61141","Type":"ContainerStarted","Data":"61b328f6a973bf2883b8cb1ade35863e5fd4d5026f8c9d39f310f7a28a335fa5"} Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.205649 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-dns-svc\") pod \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.205755 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-config\") pod \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\" (UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.205913 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2dx9\" (UniqueName: \"kubernetes.io/projected/a1d464d0-f12b-4182-83d4-eeccccfb42c8-kube-api-access-k2dx9\") pod \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\" 
(UID: \"a1d464d0-f12b-4182-83d4-eeccccfb42c8\") " Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.227920 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d464d0-f12b-4182-83d4-eeccccfb42c8-kube-api-access-k2dx9" (OuterVolumeSpecName: "kube-api-access-k2dx9") pod "a1d464d0-f12b-4182-83d4-eeccccfb42c8" (UID: "a1d464d0-f12b-4182-83d4-eeccccfb42c8"). InnerVolumeSpecName "kube-api-access-k2dx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.234276 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2dx9\" (UniqueName: \"kubernetes.io/projected/a1d464d0-f12b-4182-83d4-eeccccfb42c8-kube-api-access-k2dx9\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.261851 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4b7vx" podStartSLOduration=2.261832628 podStartE2EDuration="2.261832628s" podCreationTimestamp="2025-12-03 20:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:06.259663879 +0000 UTC m=+944.190209030" watchObservedRunningTime="2025-12-03 20:54:06.261832628 +0000 UTC m=+944.192377779" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.267974 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-config" (OuterVolumeSpecName: "config") pod "a1d464d0-f12b-4182-83d4-eeccccfb42c8" (UID: "a1d464d0-f12b-4182-83d4-eeccccfb42c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.287504 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1d464d0-f12b-4182-83d4-eeccccfb42c8" (UID: "a1d464d0-f12b-4182-83d4-eeccccfb42c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.297336 4765 scope.go:117] "RemoveContainer" containerID="e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.327344 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cxdww"] Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.335110 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.335148 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d464d0-f12b-4182-83d4-eeccccfb42c8-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.335395 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cxdww"] Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.348221 4765 scope.go:117] "RemoveContainer" containerID="ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77" Dec 03 20:54:06 crc kubenswrapper[4765]: E1203 20:54:06.348715 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77\": container with ID starting with 
ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77 not found: ID does not exist" containerID="ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.348749 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77"} err="failed to get container status \"ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77\": rpc error: code = NotFound desc = could not find container \"ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77\": container with ID starting with ab43e86d718dfa6e82f342e5f101e0c30c8b8c14ef35a3d667a1425afa2a9c77 not found: ID does not exist" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.348769 4765 scope.go:117] "RemoveContainer" containerID="e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f" Dec 03 20:54:06 crc kubenswrapper[4765]: E1203 20:54:06.349159 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f\": container with ID starting with e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f not found: ID does not exist" containerID="e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.349208 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f"} err="failed to get container status \"e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f\": rpc error: code = NotFound desc = could not find container \"e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f\": container with ID starting with e0dfa99af99c40f73c0e463d9db1ac68d957204d62862d378690294d48e3d18f not found: ID does not 
exist" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.349238 4765 scope.go:117] "RemoveContainer" containerID="e1c00ed04459533597ccf45e65020358709f7a34a87368018b0f83677f1e4201" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.372843 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" path="/var/lib/kubelet/pods/ba11afbd-0cb2-4489-b31f-e4092d7a8e14/volumes" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.379536 4765 scope.go:117] "RemoveContainer" containerID="4ac37473f2764ac2ea4b550d56b877072d3cf452ca9dd8effaabbdc24b2d0e4c" Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.478023 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8p5tm"] Dec 03 20:54:06 crc kubenswrapper[4765]: I1203 20:54:06.483155 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8p5tm"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.204756 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" event={"ID":"eaca0f78-34b4-469b-b973-0ab96adfc5fe","Type":"ContainerStarted","Data":"233d6196ca7166d7e7e480a16415b854fe8b40b50dc0177ebb685b68a9fcce87"} Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.205083 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.207251 4765 generic.go:334] "Generic (PLEG): container finished" podID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerID="1e798ff07b02f21ef14f996dadc464a3e5b7ee8b826d134816c4ac542a018c17" exitCode=0 Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.207334 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" 
event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerDied","Data":"1e798ff07b02f21ef14f996dadc464a3e5b7ee8b826d134816c4ac542a018c17"} Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.209230 4765 generic.go:334] "Generic (PLEG): container finished" podID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerID="544dc13376d19749c3b01f3871ab7c9287a388fad05317770a10500c950337a5" exitCode=0 Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.209285 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-tskk6" event={"ID":"f893cff2-cfbc-4c12-9781-c2d6a7f3905f","Type":"ContainerDied","Data":"544dc13376d19749c3b01f3871ab7c9287a388fad05317770a10500c950337a5"} Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.225955 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" podStartSLOduration=3.225936289 podStartE2EDuration="3.225936289s" podCreationTimestamp="2025-12-03 20:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:07.22334877 +0000 UTC m=+945.153893921" watchObservedRunningTime="2025-12-03 20:54:07.225936289 +0000 UTC m=+945.156481440" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.375395 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9506-account-create-update-95fmt"] Dec 03 20:54:07 crc kubenswrapper[4765]: E1203 20:54:07.375764 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerName="dnsmasq-dns" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.375779 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerName="dnsmasq-dns" Dec 03 20:54:07 crc kubenswrapper[4765]: E1203 20:54:07.375804 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerName="init" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.375810 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerName="init" Dec 03 20:54:07 crc kubenswrapper[4765]: E1203 20:54:07.375825 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerName="init" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.375842 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerName="init" Dec 03 20:54:07 crc kubenswrapper[4765]: E1203 20:54:07.375855 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerName="dnsmasq-dns" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.375864 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerName="dnsmasq-dns" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.376037 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba11afbd-0cb2-4489-b31f-e4092d7a8e14" containerName="dnsmasq-dns" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.376068 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" containerName="dnsmasq-dns" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.376645 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.378897 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.382993 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.384189 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.396243 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7gs2x"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.397471 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.410189 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9506-account-create-update-95fmt"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.425050 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7gs2x"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.464922 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.563171 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6zz\" (UniqueName: \"kubernetes.io/projected/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-kube-api-access-rb6zz\") pod \"keystone-9506-account-create-update-95fmt\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.563241 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-operator-scripts\") pod \"keystone-9506-account-create-update-95fmt\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.563287 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9czs\" (UniqueName: \"kubernetes.io/projected/3823c130-196f-4c3b-9028-301443274ef4-kube-api-access-x9czs\") pod \"keystone-db-create-7gs2x\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.563704 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3823c130-196f-4c3b-9028-301443274ef4-operator-scripts\") pod \"keystone-db-create-7gs2x\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.568400 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8vsmb"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.569386 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.574107 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8vsmb"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.665291 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6d4\" (UniqueName: \"kubernetes.io/projected/fccef2c9-a838-4fbf-a2b7-275ba5803488-kube-api-access-rl6d4\") pod \"placement-db-create-8vsmb\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.665372 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-operator-scripts\") pod \"keystone-9506-account-create-update-95fmt\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.665476 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9czs\" (UniqueName: \"kubernetes.io/projected/3823c130-196f-4c3b-9028-301443274ef4-kube-api-access-x9czs\") pod \"keystone-db-create-7gs2x\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.665535 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fccef2c9-a838-4fbf-a2b7-275ba5803488-operator-scripts\") pod \"placement-db-create-8vsmb\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.665564 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3823c130-196f-4c3b-9028-301443274ef4-operator-scripts\") pod \"keystone-db-create-7gs2x\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.665858 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6zz\" (UniqueName: \"kubernetes.io/projected/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-kube-api-access-rb6zz\") pod \"keystone-9506-account-create-update-95fmt\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.666552 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-operator-scripts\") pod \"keystone-9506-account-create-update-95fmt\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.666553 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3823c130-196f-4c3b-9028-301443274ef4-operator-scripts\") pod \"keystone-db-create-7gs2x\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.678163 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e2f8-account-create-update-qxcsc"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.679182 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.681001 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.685925 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e2f8-account-create-update-qxcsc"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.688251 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9czs\" (UniqueName: \"kubernetes.io/projected/3823c130-196f-4c3b-9028-301443274ef4-kube-api-access-x9czs\") pod \"keystone-db-create-7gs2x\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.700635 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6zz\" (UniqueName: \"kubernetes.io/projected/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-kube-api-access-rb6zz\") pod \"keystone-9506-account-create-update-95fmt\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.753288 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.754573 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.766463 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.766921 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sxp2\" (UniqueName: \"kubernetes.io/projected/ba06d7a8-a247-4572-ae04-7e29248e3878-kube-api-access-2sxp2\") pod \"placement-e2f8-account-create-update-qxcsc\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.766970 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba06d7a8-a247-4572-ae04-7e29248e3878-operator-scripts\") pod \"placement-e2f8-account-create-update-qxcsc\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.767374 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl6d4\" (UniqueName: \"kubernetes.io/projected/fccef2c9-a838-4fbf-a2b7-275ba5803488-kube-api-access-rl6d4\") pod \"placement-db-create-8vsmb\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.767454 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fccef2c9-a838-4fbf-a2b7-275ba5803488-operator-scripts\") pod \"placement-db-create-8vsmb\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.768113 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fccef2c9-a838-4fbf-a2b7-275ba5803488-operator-scripts\") pod \"placement-db-create-8vsmb\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.791671 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl6d4\" (UniqueName: \"kubernetes.io/projected/fccef2c9-a838-4fbf-a2b7-275ba5803488-kube-api-access-rl6d4\") pod \"placement-db-create-8vsmb\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.871185 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sxp2\" (UniqueName: \"kubernetes.io/projected/ba06d7a8-a247-4572-ae04-7e29248e3878-kube-api-access-2sxp2\") pod \"placement-e2f8-account-create-update-qxcsc\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.871555 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba06d7a8-a247-4572-ae04-7e29248e3878-operator-scripts\") pod \"placement-e2f8-account-create-update-qxcsc\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.874356 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.887644 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba06d7a8-a247-4572-ae04-7e29248e3878-operator-scripts\") pod \"placement-e2f8-account-create-update-qxcsc\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.928515 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sxp2\" (UniqueName: \"kubernetes.io/projected/ba06d7a8-a247-4572-ae04-7e29248e3878-kube-api-access-2sxp2\") pod \"placement-e2f8-account-create-update-qxcsc\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.957670 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.983661 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-z8wmz"] Dec 03 20:54:07 crc kubenswrapper[4765]: I1203 20:54:07.984637 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.024712 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z8wmz"] Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.073483 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.074626 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dffedbc1-9f3a-46a0-9888-bb249ecc9670-operator-scripts\") pod \"glance-db-create-z8wmz\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.074674 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cm4\" (UniqueName: \"kubernetes.io/projected/dffedbc1-9f3a-46a0-9888-bb249ecc9670-kube-api-access-z2cm4\") pod \"glance-db-create-z8wmz\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.075059 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.089364 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3ba2-account-create-update-xx6xz"] Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.092392 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.095957 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.101446 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3ba2-account-create-update-xx6xz"] Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.176851 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.180205 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9120997-5314-4a15-87c9-1315f6adbef3-operator-scripts\") pod \"glance-3ba2-account-create-update-xx6xz\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.180367 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzp7\" (UniqueName: \"kubernetes.io/projected/c9120997-5314-4a15-87c9-1315f6adbef3-kube-api-access-7zzp7\") pod \"glance-3ba2-account-create-update-xx6xz\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.180426 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dffedbc1-9f3a-46a0-9888-bb249ecc9670-operator-scripts\") pod \"glance-db-create-z8wmz\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.180471 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-z2cm4\" (UniqueName: \"kubernetes.io/projected/dffedbc1-9f3a-46a0-9888-bb249ecc9670-kube-api-access-z2cm4\") pod \"glance-db-create-z8wmz\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.182400 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dffedbc1-9f3a-46a0-9888-bb249ecc9670-operator-scripts\") pod \"glance-db-create-z8wmz\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.201633 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2cm4\" (UniqueName: \"kubernetes.io/projected/dffedbc1-9f3a-46a0-9888-bb249ecc9670-kube-api-access-z2cm4\") pod \"glance-db-create-z8wmz\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.234178 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"403709eb-a3d4-4e89-ac92-de401056e3d0","Type":"ContainerStarted","Data":"e65a3d8065151a42605a4d21ac017b599cc91568361ae2c71a5a5161610b6d39"} Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.245202 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerStarted","Data":"c6201405caa4c6d9e9143d40e0628d2250b0444f76f623392a2d92e8d55172f7"} Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.249199 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-tskk6" event={"ID":"f893cff2-cfbc-4c12-9781-c2d6a7f3905f","Type":"ContainerStarted","Data":"4adda8522a70a47d37db5d1f4816fda7d06778fa3cf88a2519c43041d335f1c2"} Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 
20:54:08.250095 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.281837 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzp7\" (UniqueName: \"kubernetes.io/projected/c9120997-5314-4a15-87c9-1315f6adbef3-kube-api-access-7zzp7\") pod \"glance-3ba2-account-create-update-xx6xz\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.281938 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9120997-5314-4a15-87c9-1315f6adbef3-operator-scripts\") pod \"glance-3ba2-account-create-update-xx6xz\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.285441 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-tskk6" podStartSLOduration=4.28542166 podStartE2EDuration="4.28542166s" podCreationTimestamp="2025-12-03 20:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:08.281012881 +0000 UTC m=+946.211558032" watchObservedRunningTime="2025-12-03 20:54:08.28542166 +0000 UTC m=+946.215966811" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.297833 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9120997-5314-4a15-87c9-1315f6adbef3-operator-scripts\") pod \"glance-3ba2-account-create-update-xx6xz\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 
20:54:08.305877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzp7\" (UniqueName: \"kubernetes.io/projected/c9120997-5314-4a15-87c9-1315f6adbef3-kube-api-access-7zzp7\") pod \"glance-3ba2-account-create-update-xx6xz\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.320286 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9506-account-create-update-95fmt"] Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.333682 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.334119 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.343847 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.377683 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d464d0-f12b-4182-83d4-eeccccfb42c8" path="/var/lib/kubelet/pods/a1d464d0-f12b-4182-83d4-eeccccfb42c8/volumes" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.391524 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8vsmb"] Dec 03 20:54:08 crc kubenswrapper[4765]: W1203 20:54:08.393789 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfccef2c9_a838_4fbf_a2b7_275ba5803488.slice/crio-03ba29190ebbc32d1671ea9128ef3a87b1222fea62db85cc79d77a5d57313e23 WatchSource:0}: Error finding container 03ba29190ebbc32d1671ea9128ef3a87b1222fea62db85cc79d77a5d57313e23: Status 404 returned error can't find the container with id 
03ba29190ebbc32d1671ea9128ef3a87b1222fea62db85cc79d77a5d57313e23 Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.456824 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7gs2x"] Dec 03 20:54:08 crc kubenswrapper[4765]: W1203 20:54:08.460076 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3823c130_196f_4c3b_9028_301443274ef4.slice/crio-40660f63ff9402f49148e59db02f46f682692b9f38b00d201429dfc7e9636dfb WatchSource:0}: Error finding container 40660f63ff9402f49148e59db02f46f682692b9f38b00d201429dfc7e9636dfb: Status 404 returned error can't find the container with id 40660f63ff9402f49148e59db02f46f682692b9f38b00d201429dfc7e9636dfb Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.467012 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.485022 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e2f8-account-create-update-qxcsc"] Dec 03 20:54:08 crc kubenswrapper[4765]: I1203 20:54:08.785853 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z8wmz"] Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.009016 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3ba2-account-create-update-xx6xz"] Dec 03 20:54:09 crc kubenswrapper[4765]: W1203 20:54:09.013234 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9120997_5314_4a15_87c9_1315f6adbef3.slice/crio-ff285dad01fc97616f67614ae52bcb62af36f62b194f395e7a886d5866e155db WatchSource:0}: Error finding container ff285dad01fc97616f67614ae52bcb62af36f62b194f395e7a886d5866e155db: Status 404 returned error can't find the container with id 
ff285dad01fc97616f67614ae52bcb62af36f62b194f395e7a886d5866e155db Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.261548 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7gs2x" event={"ID":"3823c130-196f-4c3b-9028-301443274ef4","Type":"ContainerStarted","Data":"1d1ab15d700120e2e0235f728ea9e89a43ddf35cbe9af8bf0cd08563672498d2"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.261596 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7gs2x" event={"ID":"3823c130-196f-4c3b-9028-301443274ef4","Type":"ContainerStarted","Data":"40660f63ff9402f49148e59db02f46f682692b9f38b00d201429dfc7e9636dfb"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.262957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3ba2-account-create-update-xx6xz" event={"ID":"c9120997-5314-4a15-87c9-1315f6adbef3","Type":"ContainerStarted","Data":"ff285dad01fc97616f67614ae52bcb62af36f62b194f395e7a886d5866e155db"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.265978 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z8wmz" event={"ID":"dffedbc1-9f3a-46a0-9888-bb249ecc9670","Type":"ContainerStarted","Data":"2f7da036ac56feacc2579fe2f82068a6dd6c30e6ae622215f77b17863d964885"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.266034 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z8wmz" event={"ID":"dffedbc1-9f3a-46a0-9888-bb249ecc9670","Type":"ContainerStarted","Data":"44bae944a1981fc048d81ea6e16a9fab763e2cf253fd2b097c3fbc4bd10dbc03"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.268284 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8vsmb" event={"ID":"fccef2c9-a838-4fbf-a2b7-275ba5803488","Type":"ContainerStarted","Data":"37b493717e010ef3584e8ef7c84ca53ac8a99279b339528c4c30594094e68d9a"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 
20:54:09.268369 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8vsmb" event={"ID":"fccef2c9-a838-4fbf-a2b7-275ba5803488","Type":"ContainerStarted","Data":"03ba29190ebbc32d1671ea9128ef3a87b1222fea62db85cc79d77a5d57313e23"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.270239 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e2f8-account-create-update-qxcsc" event={"ID":"ba06d7a8-a247-4572-ae04-7e29248e3878","Type":"ContainerStarted","Data":"d97eec23e76d8f99d1f99220da1919b035a9a0623f3bde643a80187642f9a021"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.270286 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e2f8-account-create-update-qxcsc" event={"ID":"ba06d7a8-a247-4572-ae04-7e29248e3878","Type":"ContainerStarted","Data":"df2e59367f929e75a7fc57edc2b8b57dfddc2cbe162d977b6e890ca94b17b6b7"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.272519 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"403709eb-a3d4-4e89-ac92-de401056e3d0","Type":"ContainerStarted","Data":"35f5d0e0d04f0cd306bd3069cb87b2c30644ee3db8ca4de0e57db62de1a70ba8"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.272692 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.275166 4765 generic.go:334] "Generic (PLEG): container finished" podID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerID="c6201405caa4c6d9e9143d40e0628d2250b0444f76f623392a2d92e8d55172f7" exitCode=0 Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.275246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerDied","Data":"c6201405caa4c6d9e9143d40e0628d2250b0444f76f623392a2d92e8d55172f7"} Dec 03 20:54:09 crc kubenswrapper[4765]: 
I1203 20:54:09.278000 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9506-account-create-update-95fmt" event={"ID":"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e","Type":"ContainerStarted","Data":"c97172b65dd25f1b607a42915a770d24ebced514ad8d6dd66db892355f9a7c40"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.278097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9506-account-create-update-95fmt" event={"ID":"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e","Type":"ContainerStarted","Data":"19918549efa24f16e2969f94f1bed68c48b9398e3d4413dd37257fe99ba4f95a"} Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.284697 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-7gs2x" podStartSLOduration=2.284666906 podStartE2EDuration="2.284666906s" podCreationTimestamp="2025-12-03 20:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:09.282093957 +0000 UTC m=+947.212639118" watchObservedRunningTime="2025-12-03 20:54:09.284666906 +0000 UTC m=+947.215212067" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.300714 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e2f8-account-create-update-qxcsc" podStartSLOduration=2.300688228 podStartE2EDuration="2.300688228s" podCreationTimestamp="2025-12-03 20:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:09.295367045 +0000 UTC m=+947.225912196" watchObservedRunningTime="2025-12-03 20:54:09.300688228 +0000 UTC m=+947.231233389" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.317545 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.784937356 podStartE2EDuration="5.317524961s" 
podCreationTimestamp="2025-12-03 20:54:04 +0000 UTC" firstStartedPulling="2025-12-03 20:54:05.918674921 +0000 UTC m=+943.849220072" lastFinishedPulling="2025-12-03 20:54:07.451262526 +0000 UTC m=+945.381807677" observedRunningTime="2025-12-03 20:54:09.314780127 +0000 UTC m=+947.245325298" watchObservedRunningTime="2025-12-03 20:54:09.317524961 +0000 UTC m=+947.248070112" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.367884 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9506-account-create-update-95fmt" podStartSLOduration=2.367846846 podStartE2EDuration="2.367846846s" podCreationTimestamp="2025-12-03 20:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:09.352796291 +0000 UTC m=+947.283341442" watchObservedRunningTime="2025-12-03 20:54:09.367846846 +0000 UTC m=+947.298391987" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.391789 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8vsmb" podStartSLOduration=2.3917566900000002 podStartE2EDuration="2.39175669s" podCreationTimestamp="2025-12-03 20:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:09.380102165 +0000 UTC m=+947.310647316" watchObservedRunningTime="2025-12-03 20:54:09.39175669 +0000 UTC m=+947.322301841" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.398505 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-z8wmz" podStartSLOduration=2.39847571 podStartE2EDuration="2.39847571s" podCreationTimestamp="2025-12-03 20:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:09.393989999 +0000 UTC 
m=+947.324535150" watchObservedRunningTime="2025-12-03 20:54:09.39847571 +0000 UTC m=+947.329020861" Dec 03 20:54:09 crc kubenswrapper[4765]: I1203 20:54:09.877680 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 20:54:10 crc kubenswrapper[4765]: I1203 20:54:10.084566 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5fw7"] Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.302133 4765 generic.go:334] "Generic (PLEG): container finished" podID="fccef2c9-a838-4fbf-a2b7-275ba5803488" containerID="37b493717e010ef3584e8ef7c84ca53ac8a99279b339528c4c30594094e68d9a" exitCode=0 Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.302208 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8vsmb" event={"ID":"fccef2c9-a838-4fbf-a2b7-275ba5803488","Type":"ContainerDied","Data":"37b493717e010ef3584e8ef7c84ca53ac8a99279b339528c4c30594094e68d9a"} Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.304091 4765 generic.go:334] "Generic (PLEG): container finished" podID="dffedbc1-9f3a-46a0-9888-bb249ecc9670" containerID="2f7da036ac56feacc2579fe2f82068a6dd6c30e6ae622215f77b17863d964885" exitCode=0 Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.304127 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z8wmz" event={"ID":"dffedbc1-9f3a-46a0-9888-bb249ecc9670","Type":"ContainerDied","Data":"2f7da036ac56feacc2579fe2f82068a6dd6c30e6ae622215f77b17863d964885"} Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.307605 4765 generic.go:334] "Generic (PLEG): container finished" podID="3823c130-196f-4c3b-9028-301443274ef4" containerID="1d1ab15d700120e2e0235f728ea9e89a43ddf35cbe9af8bf0cd08563672498d2" exitCode=0 Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.307648 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7gs2x" 
event={"ID":"3823c130-196f-4c3b-9028-301443274ef4","Type":"ContainerDied","Data":"1d1ab15d700120e2e0235f728ea9e89a43ddf35cbe9af8bf0cd08563672498d2"} Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.308996 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3ba2-account-create-update-xx6xz" event={"ID":"c9120997-5314-4a15-87c9-1315f6adbef3","Type":"ContainerStarted","Data":"8b13c33535ca3eee8601014d14da978b020799aa49fb4e4a3c7d7ee0b33133fb"} Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.309129 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v5fw7" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="registry-server" containerID="cri-o://00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838" gracePeriod=2 Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.338664 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3ba2-account-create-update-xx6xz" podStartSLOduration=3.338643086 podStartE2EDuration="3.338643086s" podCreationTimestamp="2025-12-03 20:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:11.334771082 +0000 UTC m=+949.265316243" watchObservedRunningTime="2025-12-03 20:54:11.338643086 +0000 UTC m=+949.269188247" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.507439 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wgdzp"] Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.509623 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.515895 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgdzp"] Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.615179 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vm9\" (UniqueName: \"kubernetes.io/projected/0d753880-d043-45ad-a3a8-80335fbce1c1-kube-api-access-b9vm9\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.615468 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-utilities\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.615499 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-catalog-content\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.717216 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-utilities\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.717266 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-catalog-content\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.717328 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vm9\" (UniqueName: \"kubernetes.io/projected/0d753880-d043-45ad-a3a8-80335fbce1c1-kube-api-access-b9vm9\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.718157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-utilities\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.718239 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-catalog-content\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.743399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vm9\" (UniqueName: \"kubernetes.io/projected/0d753880-d043-45ad-a3a8-80335fbce1c1-kube-api-access-b9vm9\") pod \"certified-operators-wgdzp\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.826669 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.835768 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.918643 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-catalog-content\") pod \"67520fce-47d6-408f-9571-04527e50ad22\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.918717 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg2n7\" (UniqueName: \"kubernetes.io/projected/67520fce-47d6-408f-9571-04527e50ad22-kube-api-access-sg2n7\") pod \"67520fce-47d6-408f-9571-04527e50ad22\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.918794 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-utilities\") pod \"67520fce-47d6-408f-9571-04527e50ad22\" (UID: \"67520fce-47d6-408f-9571-04527e50ad22\") " Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.919530 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-utilities" (OuterVolumeSpecName: "utilities") pod "67520fce-47d6-408f-9571-04527e50ad22" (UID: "67520fce-47d6-408f-9571-04527e50ad22"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:54:11 crc kubenswrapper[4765]: I1203 20:54:11.922231 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67520fce-47d6-408f-9571-04527e50ad22-kube-api-access-sg2n7" (OuterVolumeSpecName: "kube-api-access-sg2n7") pod "67520fce-47d6-408f-9571-04527e50ad22" (UID: "67520fce-47d6-408f-9571-04527e50ad22"). InnerVolumeSpecName "kube-api-access-sg2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.020423 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg2n7\" (UniqueName: \"kubernetes.io/projected/67520fce-47d6-408f-9571-04527e50ad22-kube-api-access-sg2n7\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.020799 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.042453 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67520fce-47d6-408f-9571-04527e50ad22" (UID: "67520fce-47d6-408f-9571-04527e50ad22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.122105 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67520fce-47d6-408f-9571-04527e50ad22-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.295955 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgdzp"] Dec 03 20:54:12 crc kubenswrapper[4765]: W1203 20:54:12.311756 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d753880_d043_45ad_a3a8_80335fbce1c1.slice/crio-1ea41531f02578882f95a5935067cdac6feadb069fe97bfd2668daa8919b705f WatchSource:0}: Error finding container 1ea41531f02578882f95a5935067cdac6feadb069fe97bfd2668daa8919b705f: Status 404 returned error can't find the container with id 1ea41531f02578882f95a5935067cdac6feadb069fe97bfd2668daa8919b705f Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.329645 4765 generic.go:334] "Generic (PLEG): container finished" podID="c9120997-5314-4a15-87c9-1315f6adbef3" containerID="8b13c33535ca3eee8601014d14da978b020799aa49fb4e4a3c7d7ee0b33133fb" exitCode=0 Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.329699 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3ba2-account-create-update-xx6xz" event={"ID":"c9120997-5314-4a15-87c9-1315f6adbef3","Type":"ContainerDied","Data":"8b13c33535ca3eee8601014d14da978b020799aa49fb4e4a3c7d7ee0b33133fb"} Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.332503 4765 generic.go:334] "Generic (PLEG): container finished" podID="ba06d7a8-a247-4572-ae04-7e29248e3878" containerID="d97eec23e76d8f99d1f99220da1919b035a9a0623f3bde643a80187642f9a021" exitCode=0 Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.332562 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-e2f8-account-create-update-qxcsc" event={"ID":"ba06d7a8-a247-4572-ae04-7e29248e3878","Type":"ContainerDied","Data":"d97eec23e76d8f99d1f99220da1919b035a9a0623f3bde643a80187642f9a021"} Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.354545 4765 generic.go:334] "Generic (PLEG): container finished" podID="67520fce-47d6-408f-9571-04527e50ad22" containerID="00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838" exitCode=0 Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.355674 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5fw7" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.357536 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5fw7" event={"ID":"67520fce-47d6-408f-9571-04527e50ad22","Type":"ContainerDied","Data":"00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838"} Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.357639 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5fw7" event={"ID":"67520fce-47d6-408f-9571-04527e50ad22","Type":"ContainerDied","Data":"09031af52bedb2a41ee2f91c26541ee45a64b9a2e12b798a2cdbf535da2c4837"} Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.357688 4765 scope.go:117] "RemoveContainer" containerID="00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.398842 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerStarted","Data":"852bb3526c6fc0222bd0203364edf309f1c75bcd1e6197e97e181dc3cef090b3"} Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.401003 4765 generic.go:334] "Generic (PLEG): container finished" podID="9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" 
containerID="c97172b65dd25f1b607a42915a770d24ebced514ad8d6dd66db892355f9a7c40" exitCode=0 Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.401289 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9506-account-create-update-95fmt" event={"ID":"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e","Type":"ContainerDied","Data":"c97172b65dd25f1b607a42915a770d24ebced514ad8d6dd66db892355f9a7c40"} Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.434461 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rqsmp" podStartSLOduration=4.608972379 podStartE2EDuration="8.434442804s" podCreationTimestamp="2025-12-03 20:54:04 +0000 UTC" firstStartedPulling="2025-12-03 20:54:07.340700629 +0000 UTC m=+945.271245780" lastFinishedPulling="2025-12-03 20:54:11.166171054 +0000 UTC m=+949.096716205" observedRunningTime="2025-12-03 20:54:12.416666254 +0000 UTC m=+950.347211405" watchObservedRunningTime="2025-12-03 20:54:12.434442804 +0000 UTC m=+950.364987955" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.448559 4765 scope.go:117] "RemoveContainer" containerID="c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.495809 4765 scope.go:117] "RemoveContainer" containerID="dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.528937 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5fw7"] Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.530032 4765 scope.go:117] "RemoveContainer" containerID="00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838" Dec 03 20:54:12 crc kubenswrapper[4765]: E1203 20:54:12.531282 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838\": container with ID starting with 00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838 not found: ID does not exist" containerID="00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.531347 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838"} err="failed to get container status \"00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838\": rpc error: code = NotFound desc = could not find container \"00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838\": container with ID starting with 00704db49c3b709e85b193912da1883db9f89860e219a1d553f53d5dae304838 not found: ID does not exist" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.531381 4765 scope.go:117] "RemoveContainer" containerID="c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7" Dec 03 20:54:12 crc kubenswrapper[4765]: E1203 20:54:12.531794 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7\": container with ID starting with c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7 not found: ID does not exist" containerID="c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.531816 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7"} err="failed to get container status \"c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7\": rpc error: code = NotFound desc = could not find container \"c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7\": container with ID 
starting with c1901426c61c7f638c25214197481ebbb87573232c1d3e691e25f52cc82a5ee7 not found: ID does not exist" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.531837 4765 scope.go:117] "RemoveContainer" containerID="dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6" Dec 03 20:54:12 crc kubenswrapper[4765]: E1203 20:54:12.532092 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6\": container with ID starting with dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6 not found: ID does not exist" containerID="dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.532120 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6"} err="failed to get container status \"dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6\": rpc error: code = NotFound desc = could not find container \"dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6\": container with ID starting with dce08d1fd360b510f375ed808624e6bcc07cb459629c0aebeddbcafb020be1e6 not found: ID does not exist" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.536869 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v5fw7"] Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.742454 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.836710 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3823c130-196f-4c3b-9028-301443274ef4-operator-scripts\") pod \"3823c130-196f-4c3b-9028-301443274ef4\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.837044 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9czs\" (UniqueName: \"kubernetes.io/projected/3823c130-196f-4c3b-9028-301443274ef4-kube-api-access-x9czs\") pod \"3823c130-196f-4c3b-9028-301443274ef4\" (UID: \"3823c130-196f-4c3b-9028-301443274ef4\") " Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.840256 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3823c130-196f-4c3b-9028-301443274ef4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3823c130-196f-4c3b-9028-301443274ef4" (UID: "3823c130-196f-4c3b-9028-301443274ef4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.863485 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3823c130-196f-4c3b-9028-301443274ef4-kube-api-access-x9czs" (OuterVolumeSpecName: "kube-api-access-x9czs") pod "3823c130-196f-4c3b-9028-301443274ef4" (UID: "3823c130-196f-4c3b-9028-301443274ef4"). InnerVolumeSpecName "kube-api-access-x9czs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.869426 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.939896 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl6d4\" (UniqueName: \"kubernetes.io/projected/fccef2c9-a838-4fbf-a2b7-275ba5803488-kube-api-access-rl6d4\") pod \"fccef2c9-a838-4fbf-a2b7-275ba5803488\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.940074 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fccef2c9-a838-4fbf-a2b7-275ba5803488-operator-scripts\") pod \"fccef2c9-a838-4fbf-a2b7-275ba5803488\" (UID: \"fccef2c9-a838-4fbf-a2b7-275ba5803488\") " Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.940500 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3823c130-196f-4c3b-9028-301443274ef4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.940518 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9czs\" (UniqueName: \"kubernetes.io/projected/3823c130-196f-4c3b-9028-301443274ef4-kube-api-access-x9czs\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.940874 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fccef2c9-a838-4fbf-a2b7-275ba5803488-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fccef2c9-a838-4fbf-a2b7-275ba5803488" (UID: "fccef2c9-a838-4fbf-a2b7-275ba5803488"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.952700 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccef2c9-a838-4fbf-a2b7-275ba5803488-kube-api-access-rl6d4" (OuterVolumeSpecName: "kube-api-access-rl6d4") pod "fccef2c9-a838-4fbf-a2b7-275ba5803488" (UID: "fccef2c9-a838-4fbf-a2b7-275ba5803488"). InnerVolumeSpecName "kube-api-access-rl6d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:12 crc kubenswrapper[4765]: I1203 20:54:12.960706 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.041667 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dffedbc1-9f3a-46a0-9888-bb249ecc9670-operator-scripts\") pod \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.041746 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2cm4\" (UniqueName: \"kubernetes.io/projected/dffedbc1-9f3a-46a0-9888-bb249ecc9670-kube-api-access-z2cm4\") pod \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\" (UID: \"dffedbc1-9f3a-46a0-9888-bb249ecc9670\") " Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.042170 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffedbc1-9f3a-46a0-9888-bb249ecc9670-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dffedbc1-9f3a-46a0-9888-bb249ecc9670" (UID: "dffedbc1-9f3a-46a0-9888-bb249ecc9670"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.042626 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl6d4\" (UniqueName: \"kubernetes.io/projected/fccef2c9-a838-4fbf-a2b7-275ba5803488-kube-api-access-rl6d4\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.042660 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dffedbc1-9f3a-46a0-9888-bb249ecc9670-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.042670 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fccef2c9-a838-4fbf-a2b7-275ba5803488-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.044906 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffedbc1-9f3a-46a0-9888-bb249ecc9670-kube-api-access-z2cm4" (OuterVolumeSpecName: "kube-api-access-z2cm4") pod "dffedbc1-9f3a-46a0-9888-bb249ecc9670" (UID: "dffedbc1-9f3a-46a0-9888-bb249ecc9670"). InnerVolumeSpecName "kube-api-access-z2cm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.144713 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2cm4\" (UniqueName: \"kubernetes.io/projected/dffedbc1-9f3a-46a0-9888-bb249ecc9670-kube-api-access-z2cm4\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.413701 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z8wmz" event={"ID":"dffedbc1-9f3a-46a0-9888-bb249ecc9670","Type":"ContainerDied","Data":"44bae944a1981fc048d81ea6e16a9fab763e2cf253fd2b097c3fbc4bd10dbc03"} Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.414544 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44bae944a1981fc048d81ea6e16a9fab763e2cf253fd2b097c3fbc4bd10dbc03" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.413711 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z8wmz" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.415836 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8vsmb" event={"ID":"fccef2c9-a838-4fbf-a2b7-275ba5803488","Type":"ContainerDied","Data":"03ba29190ebbc32d1671ea9128ef3a87b1222fea62db85cc79d77a5d57313e23"} Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.415888 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03ba29190ebbc32d1671ea9128ef3a87b1222fea62db85cc79d77a5d57313e23" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.415968 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8vsmb" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.418337 4765 generic.go:334] "Generic (PLEG): container finished" podID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerID="3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda" exitCode=0 Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.418401 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgdzp" event={"ID":"0d753880-d043-45ad-a3a8-80335fbce1c1","Type":"ContainerDied","Data":"3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda"} Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.418429 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgdzp" event={"ID":"0d753880-d043-45ad-a3a8-80335fbce1c1","Type":"ContainerStarted","Data":"1ea41531f02578882f95a5935067cdac6feadb069fe97bfd2668daa8919b705f"} Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.421631 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7gs2x" event={"ID":"3823c130-196f-4c3b-9028-301443274ef4","Type":"ContainerDied","Data":"40660f63ff9402f49148e59db02f46f682692b9f38b00d201429dfc7e9636dfb"} Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.421666 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7gs2x" Dec 03 20:54:13 crc kubenswrapper[4765]: I1203 20:54:13.421685 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40660f63ff9402f49148e59db02f46f682692b9f38b00d201429dfc7e9636dfb" Dec 03 20:54:14 crc kubenswrapper[4765]: I1203 20:54:14.374427 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67520fce-47d6-408f-9571-04527e50ad22" path="/var/lib/kubelet/pods/67520fce-47d6-408f-9571-04527e50ad22/volumes" Dec 03 20:54:14 crc kubenswrapper[4765]: I1203 20:54:14.818564 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" Dec 03 20:54:14 crc kubenswrapper[4765]: I1203 20:54:14.936943 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:14 crc kubenswrapper[4765]: I1203 20:54:14.937331 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.001717 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.083986 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.180467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzp7\" (UniqueName: \"kubernetes.io/projected/c9120997-5314-4a15-87c9-1315f6adbef3-kube-api-access-7zzp7\") pod \"c9120997-5314-4a15-87c9-1315f6adbef3\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.180577 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9120997-5314-4a15-87c9-1315f6adbef3-operator-scripts\") pod \"c9120997-5314-4a15-87c9-1315f6adbef3\" (UID: \"c9120997-5314-4a15-87c9-1315f6adbef3\") " Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.181175 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9120997-5314-4a15-87c9-1315f6adbef3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9120997-5314-4a15-87c9-1315f6adbef3" (UID: "c9120997-5314-4a15-87c9-1315f6adbef3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.188140 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9120997-5314-4a15-87c9-1315f6adbef3-kube-api-access-7zzp7" (OuterVolumeSpecName: "kube-api-access-7zzp7") pod "c9120997-5314-4a15-87c9-1315f6adbef3" (UID: "c9120997-5314-4a15-87c9-1315f6adbef3"). InnerVolumeSpecName "kube-api-access-7zzp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.255463 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.282463 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zzp7\" (UniqueName: \"kubernetes.io/projected/c9120997-5314-4a15-87c9-1315f6adbef3-kube-api-access-7zzp7\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.282497 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9120997-5314-4a15-87c9-1315f6adbef3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.289902 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.305324 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.342364 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kgg7k"] Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.448221 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3ba2-account-create-update-xx6xz" event={"ID":"c9120997-5314-4a15-87c9-1315f6adbef3","Type":"ContainerDied","Data":"ff285dad01fc97616f67614ae52bcb62af36f62b194f395e7a886d5866e155db"} Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.448589 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff285dad01fc97616f67614ae52bcb62af36f62b194f395e7a886d5866e155db" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.448229 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3ba2-account-create-update-xx6xz" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.449833 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e2f8-account-create-update-qxcsc" event={"ID":"ba06d7a8-a247-4572-ae04-7e29248e3878","Type":"ContainerDied","Data":"df2e59367f929e75a7fc57edc2b8b57dfddc2cbe162d977b6e890ca94b17b6b7"} Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.449861 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2e59367f929e75a7fc57edc2b8b57dfddc2cbe162d977b6e890ca94b17b6b7" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.449848 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e2f8-account-create-update-qxcsc" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.454801 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9506-account-create-update-95fmt" event={"ID":"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e","Type":"ContainerDied","Data":"19918549efa24f16e2969f94f1bed68c48b9398e3d4413dd37257fe99ba4f95a"} Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.454829 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19918549efa24f16e2969f94f1bed68c48b9398e3d4413dd37257fe99ba4f95a" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.454921 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9506-account-create-update-95fmt" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.455013 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerName="dnsmasq-dns" containerID="cri-o://233d6196ca7166d7e7e480a16415b854fe8b40b50dc0177ebb685b68a9fcce87" gracePeriod=10 Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.484802 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb6zz\" (UniqueName: \"kubernetes.io/projected/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-kube-api-access-rb6zz\") pod \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.484908 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba06d7a8-a247-4572-ae04-7e29248e3878-operator-scripts\") pod \"ba06d7a8-a247-4572-ae04-7e29248e3878\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.484954 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sxp2\" (UniqueName: \"kubernetes.io/projected/ba06d7a8-a247-4572-ae04-7e29248e3878-kube-api-access-2sxp2\") pod \"ba06d7a8-a247-4572-ae04-7e29248e3878\" (UID: \"ba06d7a8-a247-4572-ae04-7e29248e3878\") " Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.484977 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-operator-scripts\") pod \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\" (UID: \"9781efae-8a1d-4f26-ac4f-a6ca36af2d6e\") " Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.486679 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" (UID: "9781efae-8a1d-4f26-ac4f-a6ca36af2d6e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.491154 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba06d7a8-a247-4572-ae04-7e29248e3878-kube-api-access-2sxp2" (OuterVolumeSpecName: "kube-api-access-2sxp2") pod "ba06d7a8-a247-4572-ae04-7e29248e3878" (UID: "ba06d7a8-a247-4572-ae04-7e29248e3878"). InnerVolumeSpecName "kube-api-access-2sxp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.493574 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-kube-api-access-rb6zz" (OuterVolumeSpecName: "kube-api-access-rb6zz") pod "9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" (UID: "9781efae-8a1d-4f26-ac4f-a6ca36af2d6e"). InnerVolumeSpecName "kube-api-access-rb6zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.493871 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba06d7a8-a247-4572-ae04-7e29248e3878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba06d7a8-a247-4572-ae04-7e29248e3878" (UID: "ba06d7a8-a247-4572-ae04-7e29248e3878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.587326 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.588064 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb6zz\" (UniqueName: \"kubernetes.io/projected/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e-kube-api-access-rb6zz\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.588108 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba06d7a8-a247-4572-ae04-7e29248e3878-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:15 crc kubenswrapper[4765]: I1203 20:54:15.588121 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sxp2\" (UniqueName: \"kubernetes.io/projected/ba06d7a8-a247-4572-ae04-7e29248e3878-kube-api-access-2sxp2\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:15 crc kubenswrapper[4765]: E1203 20:54:15.975575 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaca0f78_34b4_469b_b973_0ab96adfc5fe.slice/crio-conmon-233d6196ca7166d7e7e480a16415b854fe8b40b50dc0177ebb685b68a9fcce87.scope\": RecentStats: 
unable to find data in memory cache]" Dec 03 20:54:16 crc kubenswrapper[4765]: I1203 20:54:16.464500 4765 generic.go:334] "Generic (PLEG): container finished" podID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerID="233d6196ca7166d7e7e480a16415b854fe8b40b50dc0177ebb685b68a9fcce87" exitCode=0 Dec 03 20:54:16 crc kubenswrapper[4765]: I1203 20:54:16.464559 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" event={"ID":"eaca0f78-34b4-469b-b973-0ab96adfc5fe","Type":"ContainerDied","Data":"233d6196ca7166d7e7e480a16415b854fe8b40b50dc0177ebb685b68a9fcce87"} Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.246095 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mllvt"] Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247000 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="extract-content" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247206 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="extract-content" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247233 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="extract-utilities" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247241 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="extract-utilities" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247341 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3823c130-196f-4c3b-9028-301443274ef4" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247352 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3823c130-196f-4c3b-9028-301443274ef4" containerName="mariadb-database-create" Dec 03 20:54:18 crc 
kubenswrapper[4765]: E1203 20:54:18.247363 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffedbc1-9f3a-46a0-9888-bb249ecc9670" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247370 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffedbc1-9f3a-46a0-9888-bb249ecc9670" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247378 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9120997-5314-4a15-87c9-1315f6adbef3" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247385 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9120997-5314-4a15-87c9-1315f6adbef3" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247438 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccef2c9-a838-4fbf-a2b7-275ba5803488" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247447 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccef2c9-a838-4fbf-a2b7-275ba5803488" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247464 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247472 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247482 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="registry-server" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247489 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="registry-server" Dec 03 20:54:18 crc kubenswrapper[4765]: E1203 20:54:18.247500 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba06d7a8-a247-4572-ae04-7e29248e3878" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247508 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba06d7a8-a247-4572-ae04-7e29248e3878" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247719 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="67520fce-47d6-408f-9571-04527e50ad22" containerName="registry-server" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247763 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccef2c9-a838-4fbf-a2b7-275ba5803488" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247778 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9120997-5314-4a15-87c9-1315f6adbef3" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247797 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247811 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba06d7a8-a247-4572-ae04-7e29248e3878" containerName="mariadb-account-create-update" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247831 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffedbc1-9f3a-46a0-9888-bb249ecc9670" containerName="mariadb-database-create" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.247854 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3823c130-196f-4c3b-9028-301443274ef4" containerName="mariadb-database-create" Dec 03 
20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.248483 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.250880 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.251050 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-srrtt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.268985 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mllvt"] Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.338625 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.435387 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-combined-ca-bundle\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.435479 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwtx\" (UniqueName: \"kubernetes.io/projected/80088b6b-01d5-403b-b051-fd7defbee240-kube-api-access-wfwtx\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.435507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-config-data\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") 
" pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.435582 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-db-sync-config-data\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.489651 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" event={"ID":"eaca0f78-34b4-469b-b973-0ab96adfc5fe","Type":"ContainerDied","Data":"1e2ccd18d199d5232b79f4c1c6c9ef646928da4239277d54c20c08053f70d958"} Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.489818 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kgg7k" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.489956 4765 scope.go:117] "RemoveContainer" containerID="233d6196ca7166d7e7e480a16415b854fe8b40b50dc0177ebb685b68a9fcce87" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.508434 4765 scope.go:117] "RemoveContainer" containerID="926a0eb69d8c21a3fb395ea78e4fe1c4dd471f42f26d1eed07be652dd5562a58" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536402 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-dns-svc\") pod \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536435 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc65m\" (UniqueName: \"kubernetes.io/projected/eaca0f78-34b4-469b-b973-0ab96adfc5fe-kube-api-access-vc65m\") pod \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\" (UID: 
\"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536526 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-ovsdbserver-sb\") pod \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536558 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-config\") pod \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\" (UID: \"eaca0f78-34b4-469b-b973-0ab96adfc5fe\") " Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536810 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwtx\" (UniqueName: \"kubernetes.io/projected/80088b6b-01d5-403b-b051-fd7defbee240-kube-api-access-wfwtx\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536845 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-config-data\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536906 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-db-sync-config-data\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.536950 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-combined-ca-bundle\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.544632 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaca0f78-34b4-469b-b973-0ab96adfc5fe-kube-api-access-vc65m" (OuterVolumeSpecName: "kube-api-access-vc65m") pod "eaca0f78-34b4-469b-b973-0ab96adfc5fe" (UID: "eaca0f78-34b4-469b-b973-0ab96adfc5fe"). InnerVolumeSpecName "kube-api-access-vc65m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.544834 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-config-data\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.545760 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-combined-ca-bundle\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.547790 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-db-sync-config-data\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.564267 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwtx\" (UniqueName: 
\"kubernetes.io/projected/80088b6b-01d5-403b-b051-fd7defbee240-kube-api-access-wfwtx\") pod \"glance-db-sync-mllvt\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") " pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.594193 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaca0f78-34b4-469b-b973-0ab96adfc5fe" (UID: "eaca0f78-34b4-469b-b973-0ab96adfc5fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.601844 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-config" (OuterVolumeSpecName: "config") pod "eaca0f78-34b4-469b-b973-0ab96adfc5fe" (UID: "eaca0f78-34b4-469b-b973-0ab96adfc5fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.624118 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eaca0f78-34b4-469b-b973-0ab96adfc5fe" (UID: "eaca0f78-34b4-469b-b973-0ab96adfc5fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.636205 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mllvt" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.638559 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.638582 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc65m\" (UniqueName: \"kubernetes.io/projected/eaca0f78-34b4-469b-b973-0ab96adfc5fe-kube-api-access-vc65m\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.638593 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.638601 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaca0f78-34b4-469b-b973-0ab96adfc5fe-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.875582 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kgg7k"] Dec 03 20:54:18 crc kubenswrapper[4765]: I1203 20:54:18.886332 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kgg7k"] Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.306868 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mllvt"] Dec 03 20:54:19 crc kubenswrapper[4765]: W1203 20:54:19.310205 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80088b6b_01d5_403b_b051_fd7defbee240.slice/crio-b61f83871e94b468f0f8bd9f64ee7cfbc834a212e8546897b408b5180fcaf5bc WatchSource:0}: Error finding container 
b61f83871e94b468f0f8bd9f64ee7cfbc834a212e8546897b408b5180fcaf5bc: Status 404 returned error can't find the container with id b61f83871e94b468f0f8bd9f64ee7cfbc834a212e8546897b408b5180fcaf5bc Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.497945 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mllvt" event={"ID":"80088b6b-01d5-403b-b051-fd7defbee240","Type":"ContainerStarted","Data":"b61f83871e94b468f0f8bd9f64ee7cfbc834a212e8546897b408b5180fcaf5bc"} Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.499782 4765 generic.go:334] "Generic (PLEG): container finished" podID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerID="344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973" exitCode=0 Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.499823 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgdzp" event={"ID":"0d753880-d043-45ad-a3a8-80335fbce1c1","Type":"ContainerDied","Data":"344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973"} Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.502888 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerID="61c07efe7283a35939fa86ec93228bad0cd86cad45e92d2b77fb644dd2ded0af" exitCode=0 Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.502935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fed9c9a-215a-4bd8-9381-6c20099e434d","Type":"ContainerDied","Data":"61c07efe7283a35939fa86ec93228bad0cd86cad45e92d2b77fb644dd2ded0af"} Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.508234 4765 generic.go:334] "Generic (PLEG): container finished" podID="33fa4225-5981-4b62-ac67-674896fbc047" containerID="0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb" exitCode=0 Dec 03 20:54:19 crc kubenswrapper[4765]: I1203 20:54:19.508273 4765 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fa4225-5981-4b62-ac67-674896fbc047","Type":"ContainerDied","Data":"0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb"} Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.236113 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.374724 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" path="/var/lib/kubelet/pods/eaca0f78-34b4-469b-b973-0ab96adfc5fe/volumes" Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.516851 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgdzp" event={"ID":"0d753880-d043-45ad-a3a8-80335fbce1c1","Type":"ContainerStarted","Data":"39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d"} Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.520927 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fed9c9a-215a-4bd8-9381-6c20099e434d","Type":"ContainerStarted","Data":"d0b976c3a48dfbe3f96de0a41b56dd2a9d4b600dd0c200a778ad93cda95cb6ea"} Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.521135 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.524822 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fa4225-5981-4b62-ac67-674896fbc047","Type":"ContainerStarted","Data":"bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8"} Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.525017 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.542469 4765 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-wgdzp" podStartSLOduration=2.7649020159999997 podStartE2EDuration="9.542452877s" podCreationTimestamp="2025-12-03 20:54:11 +0000 UTC" firstStartedPulling="2025-12-03 20:54:13.420036733 +0000 UTC m=+951.350581884" lastFinishedPulling="2025-12-03 20:54:20.197587584 +0000 UTC m=+958.128132745" observedRunningTime="2025-12-03 20:54:20.540100344 +0000 UTC m=+958.470645515" watchObservedRunningTime="2025-12-03 20:54:20.542452877 +0000 UTC m=+958.472998028" Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.570434 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.708023567 podStartE2EDuration="57.57041212s" podCreationTimestamp="2025-12-03 20:53:23 +0000 UTC" firstStartedPulling="2025-12-03 20:53:25.473623825 +0000 UTC m=+903.404168976" lastFinishedPulling="2025-12-03 20:53:45.336012368 +0000 UTC m=+923.266557529" observedRunningTime="2025-12-03 20:54:20.56705339 +0000 UTC m=+958.497598551" watchObservedRunningTime="2025-12-03 20:54:20.57041212 +0000 UTC m=+958.500957281" Dec 03 20:54:20 crc kubenswrapper[4765]: I1203 20:54:20.592891 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.296648751 podStartE2EDuration="57.592869314s" podCreationTimestamp="2025-12-03 20:53:23 +0000 UTC" firstStartedPulling="2025-12-03 20:53:25.429447762 +0000 UTC m=+903.359992913" lastFinishedPulling="2025-12-03 20:53:41.725668325 +0000 UTC m=+919.656213476" observedRunningTime="2025-12-03 20:54:20.585748792 +0000 UTC m=+958.516293973" watchObservedRunningTime="2025-12-03 20:54:20.592869314 +0000 UTC m=+958.523414465" Dec 03 20:54:21 crc kubenswrapper[4765]: I1203 20:54:21.827797 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:21 crc kubenswrapper[4765]: I1203 
20:54:21.827850 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:22 crc kubenswrapper[4765]: I1203 20:54:22.892034 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wgdzp" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="registry-server" probeResult="failure" output=< Dec 03 20:54:22 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 20:54:22 crc kubenswrapper[4765]: > Dec 03 20:54:24 crc kubenswrapper[4765]: I1203 20:54:24.988788 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:25 crc kubenswrapper[4765]: I1203 20:54:25.038307 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqsmp"] Dec 03 20:54:25 crc kubenswrapper[4765]: I1203 20:54:25.569706 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rqsmp" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="registry-server" containerID="cri-o://852bb3526c6fc0222bd0203364edf309f1c75bcd1e6197e97e181dc3cef090b3" gracePeriod=2 Dec 03 20:54:26 crc kubenswrapper[4765]: I1203 20:54:26.580417 4765 generic.go:334] "Generic (PLEG): container finished" podID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerID="852bb3526c6fc0222bd0203364edf309f1c75bcd1e6197e97e181dc3cef090b3" exitCode=0 Dec 03 20:54:26 crc kubenswrapper[4765]: I1203 20:54:26.580498 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerDied","Data":"852bb3526c6fc0222bd0203364edf309f1c75bcd1e6197e97e181dc3cef090b3"} Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.307734 4765 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-f85pk" podUID="2a9aeba1-759a-41ad-a871-5cfa33de5aae" containerName="ovn-controller" probeResult="failure" output=< Dec 03 20:54:29 crc kubenswrapper[4765]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 20:54:29 crc kubenswrapper[4765]: > Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.331455 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.333991 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wbnps" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.567350 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f85pk-config-7xzg7"] Dec 03 20:54:29 crc kubenswrapper[4765]: E1203 20:54:29.567768 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerName="init" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.567786 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerName="init" Dec 03 20:54:29 crc kubenswrapper[4765]: E1203 20:54:29.567829 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerName="dnsmasq-dns" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.567839 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerName="dnsmasq-dns" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.568041 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaca0f78-34b4-469b-b973-0ab96adfc5fe" containerName="dnsmasq-dns" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.568756 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.570867 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.582981 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f85pk-config-7xzg7"] Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.632725 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-log-ovn\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.632782 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.633205 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run-ovn\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.633257 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-scripts\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") 
" pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.633400 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-additional-scripts\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.633461 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbc2b\" (UniqueName: \"kubernetes.io/projected/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-kube-api-access-xbc2b\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.734765 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run-ovn\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.734821 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-scripts\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.734851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-additional-scripts\") pod \"ovn-controller-f85pk-config-7xzg7\" 
(UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.734869 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbc2b\" (UniqueName: \"kubernetes.io/projected/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-kube-api-access-xbc2b\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.734957 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-log-ovn\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.735043 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.735091 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run-ovn\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.735116 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-log-ovn\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: 
\"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.735146 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.735802 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-additional-scripts\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.736889 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-scripts\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.757712 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbc2b\" (UniqueName: \"kubernetes.io/projected/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-kube-api-access-xbc2b\") pod \"ovn-controller-f85pk-config-7xzg7\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:29 crc kubenswrapper[4765]: I1203 20:54:29.885419 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.841072 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8k76"] Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.843350 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.854219 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8k76"] Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.903756 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.954682 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.976253 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-catalog-content\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.976317 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ccw\" (UniqueName: \"kubernetes.io/projected/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-kube-api-access-d9ccw\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:31 crc kubenswrapper[4765]: I1203 20:54:31.976388 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-utilities\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.078105 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-catalog-content\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.078155 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ccw\" (UniqueName: \"kubernetes.io/projected/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-kube-api-access-d9ccw\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.078215 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-utilities\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.078792 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-catalog-content\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.078937 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-utilities\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.102444 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ccw\" (UniqueName: \"kubernetes.io/projected/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-kube-api-access-d9ccw\") pod \"community-operators-q8k76\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:32 crc kubenswrapper[4765]: I1203 20:54:32.179756 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.748046 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.806066 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxh9\" (UniqueName: \"kubernetes.io/projected/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-kube-api-access-fcxh9\") pod \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.806175 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-catalog-content\") pod \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.806331 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-utilities\") pod 
\"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\" (UID: \"10d43e32-c91c-4a30-ba5a-de0c3d1b0800\") " Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.807551 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-utilities" (OuterVolumeSpecName: "utilities") pod "10d43e32-c91c-4a30-ba5a-de0c3d1b0800" (UID: "10d43e32-c91c-4a30-ba5a-de0c3d1b0800"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.810780 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-kube-api-access-fcxh9" (OuterVolumeSpecName: "kube-api-access-fcxh9") pod "10d43e32-c91c-4a30-ba5a-de0c3d1b0800" (UID: "10d43e32-c91c-4a30-ba5a-de0c3d1b0800"). InnerVolumeSpecName "kube-api-access-fcxh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.828878 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10d43e32-c91c-4a30-ba5a-de0c3d1b0800" (UID: "10d43e32-c91c-4a30-ba5a-de0c3d1b0800"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.908489 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.908540 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcxh9\" (UniqueName: \"kubernetes.io/projected/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-kube-api-access-fcxh9\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.908557 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d43e32-c91c-4a30-ba5a-de0c3d1b0800-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.990594 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8k76"] Dec 03 20:54:33 crc kubenswrapper[4765]: W1203 20:54:33.991977 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4afb3b5_df7d_47c7_b743_6cf40f7dcb03.slice/crio-e6ca5a299486a496e93ec4487e76cba638134b9d7626ca3300c94c3423fecbc2 WatchSource:0}: Error finding container e6ca5a299486a496e93ec4487e76cba638134b9d7626ca3300c94c3423fecbc2: Status 404 returned error can't find the container with id e6ca5a299486a496e93ec4487e76cba638134b9d7626ca3300c94c3423fecbc2 Dec 03 20:54:33 crc kubenswrapper[4765]: W1203 20:54:33.994512 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4096f6f7_20e2_4ee1_8cbf_c6473ba8ab6a.slice/crio-0d8a82624570b63f6153869b9f4ff4cd62218dfc7ef0ec70068d2af32ca53abf WatchSource:0}: Error finding container 0d8a82624570b63f6153869b9f4ff4cd62218dfc7ef0ec70068d2af32ca53abf: 
Status 404 returned error can't find the container with id 0d8a82624570b63f6153869b9f4ff4cd62218dfc7ef0ec70068d2af32ca53abf Dec 03 20:54:33 crc kubenswrapper[4765]: I1203 20:54:33.998766 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f85pk-config-7xzg7"] Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.195669 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgdzp"] Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.196236 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wgdzp" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="registry-server" containerID="cri-o://39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d" gracePeriod=2 Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.326708 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-f85pk" podUID="2a9aeba1-759a-41ad-a871-5cfa33de5aae" containerName="ovn-controller" probeResult="failure" output=< Dec 03 20:54:34 crc kubenswrapper[4765]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 20:54:34 crc kubenswrapper[4765]: > Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.628851 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.663380 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mllvt" event={"ID":"80088b6b-01d5-403b-b051-fd7defbee240","Type":"ContainerStarted","Data":"462f835b9cddd802be5c5430dda21dc715fe276aa6fbf56cce397ee722285538"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.665534 4765 generic.go:334] "Generic (PLEG): container finished" podID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerID="98b57177d49b1bc6d153f7cba0c4175b43159fd937fde38882db80c11e6bfe07" exitCode=0 Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.665597 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerDied","Data":"98b57177d49b1bc6d153f7cba0c4175b43159fd937fde38882db80c11e6bfe07"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.665621 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerStarted","Data":"0d8a82624570b63f6153869b9f4ff4cd62218dfc7ef0ec70068d2af32ca53abf"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.670219 4765 generic.go:334] "Generic (PLEG): container finished" podID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerID="39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d" exitCode=0 Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.670329 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgdzp" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.670675 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgdzp" event={"ID":"0d753880-d043-45ad-a3a8-80335fbce1c1","Type":"ContainerDied","Data":"39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.670701 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgdzp" event={"ID":"0d753880-d043-45ad-a3a8-80335fbce1c1","Type":"ContainerDied","Data":"1ea41531f02578882f95a5935067cdac6feadb069fe97bfd2668daa8919b705f"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.670716 4765 scope.go:117] "RemoveContainer" containerID="39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.681864 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqsmp" event={"ID":"10d43e32-c91c-4a30-ba5a-de0c3d1b0800","Type":"ContainerDied","Data":"cb456d73e70001f01597fd9b194fedb2baa9fef76277070ecdac77578ed212c1"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.681923 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqsmp" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.684484 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-7xzg7" event={"ID":"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03","Type":"ContainerStarted","Data":"fff205ef81863b7053c6630f280650dfa7adf09912225c268437151f120b3aaf"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.684533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-7xzg7" event={"ID":"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03","Type":"ContainerStarted","Data":"e6ca5a299486a496e93ec4487e76cba638134b9d7626ca3300c94c3423fecbc2"} Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.688155 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mllvt" podStartSLOduration=2.4352616 podStartE2EDuration="16.688142325s" podCreationTimestamp="2025-12-03 20:54:18 +0000 UTC" firstStartedPulling="2025-12-03 20:54:19.312770756 +0000 UTC m=+957.243315897" lastFinishedPulling="2025-12-03 20:54:33.565651471 +0000 UTC m=+971.496196622" observedRunningTime="2025-12-03 20:54:34.678229019 +0000 UTC m=+972.608774170" watchObservedRunningTime="2025-12-03 20:54:34.688142325 +0000 UTC m=+972.618687476" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.693132 4765 scope.go:117] "RemoveContainer" containerID="344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.713442 4765 scope.go:117] "RemoveContainer" containerID="3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.722337 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-catalog-content\") pod \"0d753880-d043-45ad-a3a8-80335fbce1c1\" 
(UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.722592 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9vm9\" (UniqueName: \"kubernetes.io/projected/0d753880-d043-45ad-a3a8-80335fbce1c1-kube-api-access-b9vm9\") pod \"0d753880-d043-45ad-a3a8-80335fbce1c1\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.722673 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-utilities\") pod \"0d753880-d043-45ad-a3a8-80335fbce1c1\" (UID: \"0d753880-d043-45ad-a3a8-80335fbce1c1\") " Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.726071 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-utilities" (OuterVolumeSpecName: "utilities") pod "0d753880-d043-45ad-a3a8-80335fbce1c1" (UID: "0d753880-d043-45ad-a3a8-80335fbce1c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.732732 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d753880-d043-45ad-a3a8-80335fbce1c1-kube-api-access-b9vm9" (OuterVolumeSpecName: "kube-api-access-b9vm9") pod "0d753880-d043-45ad-a3a8-80335fbce1c1" (UID: "0d753880-d043-45ad-a3a8-80335fbce1c1"). InnerVolumeSpecName "kube-api-access-b9vm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.737074 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-f85pk-config-7xzg7" podStartSLOduration=5.737056433 podStartE2EDuration="5.737056433s" podCreationTimestamp="2025-12-03 20:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:34.71949527 +0000 UTC m=+972.650040421" watchObservedRunningTime="2025-12-03 20:54:34.737056433 +0000 UTC m=+972.667601584" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.744924 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqsmp"] Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.752242 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqsmp"] Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.779477 4765 scope.go:117] "RemoveContainer" containerID="39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d" Dec 03 20:54:34 crc kubenswrapper[4765]: E1203 20:54:34.780540 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d\": container with ID starting with 39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d not found: ID does not exist" containerID="39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.780570 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d"} err="failed to get container status \"39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d\": rpc error: code = NotFound desc = could not 
find container \"39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d\": container with ID starting with 39cb310915ae5197561b634c953d83ff25736cfc5e74c06fcace182ddd81a59d not found: ID does not exist" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.780591 4765 scope.go:117] "RemoveContainer" containerID="344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973" Dec 03 20:54:34 crc kubenswrapper[4765]: E1203 20:54:34.780834 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973\": container with ID starting with 344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973 not found: ID does not exist" containerID="344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.780853 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973"} err="failed to get container status \"344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973\": rpc error: code = NotFound desc = could not find container \"344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973\": container with ID starting with 344235d5ea38e45b9fc03c58283e54774e31d3087e4b33627bdc0051f4624973 not found: ID does not exist" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.780866 4765 scope.go:117] "RemoveContainer" containerID="3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda" Dec 03 20:54:34 crc kubenswrapper[4765]: E1203 20:54:34.781060 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda\": container with ID starting with 3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda not found: ID 
does not exist" containerID="3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.781075 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda"} err="failed to get container status \"3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda\": rpc error: code = NotFound desc = could not find container \"3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda\": container with ID starting with 3e7a454f2a5cb3c2ea9a5197d4c57dd0bd658e80bdba889120ddc97bd3201cda not found: ID does not exist" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.781086 4765 scope.go:117] "RemoveContainer" containerID="852bb3526c6fc0222bd0203364edf309f1c75bcd1e6197e97e181dc3cef090b3" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.784500 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d753880-d043-45ad-a3a8-80335fbce1c1" (UID: "0d753880-d043-45ad-a3a8-80335fbce1c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.824370 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9vm9\" (UniqueName: \"kubernetes.io/projected/0d753880-d043-45ad-a3a8-80335fbce1c1-kube-api-access-b9vm9\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.824435 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.824447 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d753880-d043-45ad-a3a8-80335fbce1c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.835000 4765 scope.go:117] "RemoveContainer" containerID="c6201405caa4c6d9e9143d40e0628d2250b0444f76f623392a2d92e8d55172f7" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.850083 4765 scope.go:117] "RemoveContainer" containerID="1e798ff07b02f21ef14f996dadc464a3e5b7ee8b826d134816c4ac542a018c17" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.933788 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 20:54:34 crc kubenswrapper[4765]: I1203 20:54:34.950735 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.058874 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgdzp"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.067709 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wgdzp"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296074 4765 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7r84p"] Dec 03 20:54:35 crc kubenswrapper[4765]: E1203 20:54:35.296415 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="extract-content" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296432 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="extract-content" Dec 03 20:54:35 crc kubenswrapper[4765]: E1203 20:54:35.296447 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="extract-utilities" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296452 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="extract-utilities" Dec 03 20:54:35 crc kubenswrapper[4765]: E1203 20:54:35.296464 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="extract-utilities" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296470 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="extract-utilities" Dec 03 20:54:35 crc kubenswrapper[4765]: E1203 20:54:35.296484 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="registry-server" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296490 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="registry-server" Dec 03 20:54:35 crc kubenswrapper[4765]: E1203 20:54:35.296503 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="extract-content" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296510 4765 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="extract-content" Dec 03 20:54:35 crc kubenswrapper[4765]: E1203 20:54:35.296521 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="registry-server" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296527 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="registry-server" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296675 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" containerName="registry-server" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.296689 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" containerName="registry-server" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.297173 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.332153 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml7d6\" (UniqueName: \"kubernetes.io/projected/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-kube-api-access-ml7d6\") pod \"cinder-db-create-7r84p\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") " pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.332250 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-operator-scripts\") pod \"cinder-db-create-7r84p\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") " pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.347577 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7r84p"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.409683 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-f2xv7"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.410961 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.417786 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-f2xv7"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.443089 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-operator-scripts\") pod \"cinder-db-create-7r84p\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") " pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.443210 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml7d6\" (UniqueName: \"kubernetes.io/projected/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-kube-api-access-ml7d6\") pod \"cinder-db-create-7r84p\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") " pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.443920 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-operator-scripts\") pod \"cinder-db-create-7r84p\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") " pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.513672 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml7d6\" (UniqueName: \"kubernetes.io/projected/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-kube-api-access-ml7d6\") pod \"cinder-db-create-7r84p\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") " pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.547095 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fb6k\" (UniqueName: 
\"kubernetes.io/projected/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-kube-api-access-9fb6k\") pod \"barbican-db-create-f2xv7\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") " pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.547172 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-operator-scripts\") pod \"barbican-db-create-f2xv7\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") " pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.634898 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7c91-account-create-update-7cstw"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.636679 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.643171 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.647676 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7c91-account-create-update-7cstw"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.655643 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7r84p" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.685040 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fb6k\" (UniqueName: \"kubernetes.io/projected/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-kube-api-access-9fb6k\") pod \"barbican-db-create-f2xv7\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") " pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.685151 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-operator-scripts\") pod \"barbican-db-create-f2xv7\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") " pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.686234 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-operator-scripts\") pod \"barbican-db-create-f2xv7\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") " pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.701400 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-49dvf"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.702490 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.707648 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.708352 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.709028 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bs5mp" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.763499 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fb6k\" (UniqueName: \"kubernetes.io/projected/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-kube-api-access-9fb6k\") pod \"barbican-db-create-f2xv7\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") " pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.765002 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.771062 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerStarted","Data":"0df15d55e6d4dd35221f416841152f695799862f1f10b036c02a47066ad61a78"} Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.786282 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-f2xv7" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.821212 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztsl\" (UniqueName: \"kubernetes.io/projected/b5b83af8-4f5b-405c-9961-3f37c37ee18b-kube-api-access-mztsl\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.821412 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1959025f-e3a0-42a9-b4f7-c55151f34a91-operator-scripts\") pod \"cinder-7c91-account-create-update-7cstw\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") " pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.821585 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-combined-ca-bundle\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.821604 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-config-data\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.821640 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjwt\" (UniqueName: \"kubernetes.io/projected/1959025f-e3a0-42a9-b4f7-c55151f34a91-kube-api-access-7zjwt\") pod 
\"cinder-7c91-account-create-update-7cstw\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") " pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.828754 4765 generic.go:334] "Generic (PLEG): container finished" podID="b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" containerID="fff205ef81863b7053c6630f280650dfa7adf09912225c268437151f120b3aaf" exitCode=0 Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.829429 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-7xzg7" event={"ID":"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03","Type":"ContainerDied","Data":"fff205ef81863b7053c6630f280650dfa7adf09912225c268437151f120b3aaf"} Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.832435 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-49dvf"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.845413 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-q9s77"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.847019 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.859342 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7dec-account-create-update-zdgw6"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.860460 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.865847 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.881466 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7dec-account-create-update-zdgw6"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.884199 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q9s77"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.907404 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dd63-account-create-update-rkkhn"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.908650 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.913706 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.915399 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dd63-account-create-update-rkkhn"] Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.923315 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztsl\" (UniqueName: \"kubernetes.io/projected/b5b83af8-4f5b-405c-9961-3f37c37ee18b-kube-api-access-mztsl\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.923420 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1959025f-e3a0-42a9-b4f7-c55151f34a91-operator-scripts\") pod \"cinder-7c91-account-create-update-7cstw\" (UID: 
\"1959025f-e3a0-42a9-b4f7-c55151f34a91\") " pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.923541 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-combined-ca-bundle\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.923564 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-config-data\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.923595 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjwt\" (UniqueName: \"kubernetes.io/projected/1959025f-e3a0-42a9-b4f7-c55151f34a91-kube-api-access-7zjwt\") pod \"cinder-7c91-account-create-update-7cstw\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") " pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.924952 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1959025f-e3a0-42a9-b4f7-c55151f34a91-operator-scripts\") pod \"cinder-7c91-account-create-update-7cstw\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") " pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.929785 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-combined-ca-bundle\") pod \"keystone-db-sync-49dvf\" (UID: 
\"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.935091 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-config-data\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.947654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztsl\" (UniqueName: \"kubernetes.io/projected/b5b83af8-4f5b-405c-9961-3f37c37ee18b-kube-api-access-mztsl\") pod \"keystone-db-sync-49dvf\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.957027 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjwt\" (UniqueName: \"kubernetes.io/projected/1959025f-e3a0-42a9-b4f7-c55151f34a91-kube-api-access-7zjwt\") pod \"cinder-7c91-account-create-update-7cstw\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") " pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:35 crc kubenswrapper[4765]: I1203 20:54:35.973061 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7c91-account-create-update-7cstw" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.024751 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qt2f\" (UniqueName: \"kubernetes.io/projected/42e811ab-603b-493f-8df9-9e00de2b9cef-kube-api-access-2qt2f\") pod \"neutron-dd63-account-create-update-rkkhn\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") " pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.024795 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-operator-scripts\") pod \"barbican-7dec-account-create-update-zdgw6\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") " pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.024893 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z89q\" (UniqueName: \"kubernetes.io/projected/dc0c742d-3c3d-4020-89a8-53453beeab23-kube-api-access-6z89q\") pod \"neutron-db-create-q9s77\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") " pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.024911 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0c742d-3c3d-4020-89a8-53453beeab23-operator-scripts\") pod \"neutron-db-create-q9s77\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") " pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.024951 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/42e811ab-603b-493f-8df9-9e00de2b9cef-operator-scripts\") pod \"neutron-dd63-account-create-update-rkkhn\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") " pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.024998 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jk7b\" (UniqueName: \"kubernetes.io/projected/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-kube-api-access-8jk7b\") pod \"barbican-7dec-account-create-update-zdgw6\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") " pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.080889 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.128388 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-operator-scripts\") pod \"barbican-7dec-account-create-update-zdgw6\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") " pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.128807 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z89q\" (UniqueName: \"kubernetes.io/projected/dc0c742d-3c3d-4020-89a8-53453beeab23-kube-api-access-6z89q\") pod \"neutron-db-create-q9s77\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") " pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.128851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0c742d-3c3d-4020-89a8-53453beeab23-operator-scripts\") pod \"neutron-db-create-q9s77\" (UID: 
\"dc0c742d-3c3d-4020-89a8-53453beeab23\") " pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.128894 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e811ab-603b-493f-8df9-9e00de2b9cef-operator-scripts\") pod \"neutron-dd63-account-create-update-rkkhn\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") " pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.128945 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jk7b\" (UniqueName: \"kubernetes.io/projected/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-kube-api-access-8jk7b\") pod \"barbican-7dec-account-create-update-zdgw6\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") " pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.129004 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qt2f\" (UniqueName: \"kubernetes.io/projected/42e811ab-603b-493f-8df9-9e00de2b9cef-kube-api-access-2qt2f\") pod \"neutron-dd63-account-create-update-rkkhn\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") " pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.130062 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-operator-scripts\") pod \"barbican-7dec-account-create-update-zdgw6\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") " pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.130727 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dc0c742d-3c3d-4020-89a8-53453beeab23-operator-scripts\") pod \"neutron-db-create-q9s77\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") " pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.131211 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e811ab-603b-493f-8df9-9e00de2b9cef-operator-scripts\") pod \"neutron-dd63-account-create-update-rkkhn\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") " pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.165665 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jk7b\" (UniqueName: \"kubernetes.io/projected/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-kube-api-access-8jk7b\") pod \"barbican-7dec-account-create-update-zdgw6\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") " pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.165792 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z89q\" (UniqueName: \"kubernetes.io/projected/dc0c742d-3c3d-4020-89a8-53453beeab23-kube-api-access-6z89q\") pod \"neutron-db-create-q9s77\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") " pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.171932 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qt2f\" (UniqueName: \"kubernetes.io/projected/42e811ab-603b-493f-8df9-9e00de2b9cef-kube-api-access-2qt2f\") pod \"neutron-dd63-account-create-update-rkkhn\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") " pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.179611 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-q9s77" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.188081 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7dec-account-create-update-zdgw6" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.242049 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd63-account-create-update-rkkhn" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.352987 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-f2xv7"] Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.376887 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d753880-d043-45ad-a3a8-80335fbce1c1" path="/var/lib/kubelet/pods/0d753880-d043-45ad-a3a8-80335fbce1c1/volumes" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.377720 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d43e32-c91c-4a30-ba5a-de0c3d1b0800" path="/var/lib/kubelet/pods/10d43e32-c91c-4a30-ba5a-de0c3d1b0800/volumes" Dec 03 20:54:36 crc kubenswrapper[4765]: W1203 20:54:36.377735 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e37f2d0_b9ea_4e9a_8259_0be89e132c64.slice/crio-90e2b7dcd2bc6f122818e262145f09d01a2e7151b499efca452aae32f86faac4 WatchSource:0}: Error finding container 90e2b7dcd2bc6f122818e262145f09d01a2e7151b499efca452aae32f86faac4: Status 404 returned error can't find the container with id 90e2b7dcd2bc6f122818e262145f09d01a2e7151b499efca452aae32f86faac4 Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.386623 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7r84p"] Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.580886 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7c91-account-create-update-7cstw"] Dec 03 
20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.619733 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7dec-account-create-update-zdgw6"] Dec 03 20:54:36 crc kubenswrapper[4765]: W1203 20:54:36.620752 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ab27ead_a53a_41ff_8f57_7fd8e0da7e21.slice/crio-39e0fbfa125300a50b40f837b35466747e4620e730388e189a0c320382cdcc4a WatchSource:0}: Error finding container 39e0fbfa125300a50b40f837b35466747e4620e730388e189a0c320382cdcc4a: Status 404 returned error can't find the container with id 39e0fbfa125300a50b40f837b35466747e4620e730388e189a0c320382cdcc4a Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.738561 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-49dvf"] Dec 03 20:54:36 crc kubenswrapper[4765]: W1203 20:54:36.748425 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5b83af8_4f5b_405c_9961_3f37c37ee18b.slice/crio-948a7b34a29f95ca57731062e13310f51c2f04e25abefa08fa4ef96638a39b6d WatchSource:0}: Error finding container 948a7b34a29f95ca57731062e13310f51c2f04e25abefa08fa4ef96638a39b6d: Status 404 returned error can't find the container with id 948a7b34a29f95ca57731062e13310f51c2f04e25abefa08fa4ef96638a39b6d Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.839545 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f2xv7" event={"ID":"1e37f2d0-b9ea-4e9a-8259-0be89e132c64","Type":"ContainerStarted","Data":"f16d0ceeabe8a5de605482a02b50f040d50487ad9861e9d0da2e7f8aedb8158b"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.839585 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f2xv7" 
event={"ID":"1e37f2d0-b9ea-4e9a-8259-0be89e132c64","Type":"ContainerStarted","Data":"90e2b7dcd2bc6f122818e262145f09d01a2e7151b499efca452aae32f86faac4"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.842623 4765 generic.go:334] "Generic (PLEG): container finished" podID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerID="0df15d55e6d4dd35221f416841152f695799862f1f10b036c02a47066ad61a78" exitCode=0 Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.842715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerDied","Data":"0df15d55e6d4dd35221f416841152f695799862f1f10b036c02a47066ad61a78"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.844755 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7c91-account-create-update-7cstw" event={"ID":"1959025f-e3a0-42a9-b4f7-c55151f34a91","Type":"ContainerStarted","Data":"91bcf5930a795ba393fe362cbe1e05497a98bac6f2a4867cfef5331bf45acf96"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.845725 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7dec-account-create-update-zdgw6" event={"ID":"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21","Type":"ContainerStarted","Data":"39e0fbfa125300a50b40f837b35466747e4620e730388e189a0c320382cdcc4a"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.847408 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r84p" event={"ID":"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675","Type":"ContainerStarted","Data":"5df27bc840b213b034ef25622938ea039a34314b244aaed852501387c7208792"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.847432 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r84p" 
event={"ID":"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675","Type":"ContainerStarted","Data":"19d5e0131fd0fc233312c1c497f5d7cccbbaec476d2f3364cc5549b15fe929cf"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.848714 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49dvf" event={"ID":"b5b83af8-4f5b-405c-9961-3f37c37ee18b","Type":"ContainerStarted","Data":"948a7b34a29f95ca57731062e13310f51c2f04e25abefa08fa4ef96638a39b6d"} Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.856112 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-f2xv7" podStartSLOduration=1.856098873 podStartE2EDuration="1.856098873s" podCreationTimestamp="2025-12-03 20:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:36.855100637 +0000 UTC m=+974.785645788" watchObservedRunningTime="2025-12-03 20:54:36.856098873 +0000 UTC m=+974.786644024" Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.873755 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dd63-account-create-update-rkkhn"] Dec 03 20:54:36 crc kubenswrapper[4765]: W1203 20:54:36.877482 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42e811ab_603b_493f_8df9_9e00de2b9cef.slice/crio-d7297da6959d9decef684835b9ab1e0942bcb1148915bc002b12c6481e832b01 WatchSource:0}: Error finding container d7297da6959d9decef684835b9ab1e0942bcb1148915bc002b12c6481e832b01: Status 404 returned error can't find the container with id d7297da6959d9decef684835b9ab1e0942bcb1148915bc002b12c6481e832b01 Dec 03 20:54:36 crc kubenswrapper[4765]: I1203 20:54:36.947778 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q9s77"] Dec 03 20:54:36 crc kubenswrapper[4765]: W1203 20:54:36.961459 4765 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc0c742d_3c3d_4020_89a8_53453beeab23.slice/crio-8cb38d3aba61d1312c7d2cb68af4165c5ae342d7127c1fcb1be75a2e31917dea WatchSource:0}: Error finding container 8cb38d3aba61d1312c7d2cb68af4165c5ae342d7127c1fcb1be75a2e31917dea: Status 404 returned error can't find the container with id 8cb38d3aba61d1312c7d2cb68af4165c5ae342d7127c1fcb1be75a2e31917dea Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.176053 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.348901 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-log-ovn\") pod \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349354 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbc2b\" (UniqueName: \"kubernetes.io/projected/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-kube-api-access-xbc2b\") pod \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349090 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" (UID: "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349550 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run\") pod \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349599 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-additional-scripts\") pod \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349658 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run" (OuterVolumeSpecName: "var-run") pod "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" (UID: "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349673 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run-ovn\") pod \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349716 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" (UID: "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.349779 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-scripts\") pod \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\" (UID: \"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03\") " Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.350365 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" (UID: "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.350546 4765 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.350575 4765 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.350588 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-scripts" (OuterVolumeSpecName: "scripts") pod "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" (UID: "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.350593 4765 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.350635 4765 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.356457 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-kube-api-access-xbc2b" (OuterVolumeSpecName: "kube-api-access-xbc2b") pod "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" (UID: "b4afb3b5-df7d-47c7-b743-6cf40f7dcb03"). InnerVolumeSpecName "kube-api-access-xbc2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.452529 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.452569 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbc2b\" (UniqueName: \"kubernetes.io/projected/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03-kube-api-access-xbc2b\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.861913 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-7xzg7" event={"ID":"b4afb3b5-df7d-47c7-b743-6cf40f7dcb03","Type":"ContainerDied","Data":"e6ca5a299486a496e93ec4487e76cba638134b9d7626ca3300c94c3423fecbc2"} Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.861954 4765 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="e6ca5a299486a496e93ec4487e76cba638134b9d7626ca3300c94c3423fecbc2" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.862010 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk-config-7xzg7" Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.865348 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q9s77" event={"ID":"dc0c742d-3c3d-4020-89a8-53453beeab23","Type":"ContainerStarted","Data":"8cb38d3aba61d1312c7d2cb68af4165c5ae342d7127c1fcb1be75a2e31917dea"} Dec 03 20:54:37 crc kubenswrapper[4765]: I1203 20:54:37.866928 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd63-account-create-update-rkkhn" event={"ID":"42e811ab-603b-493f-8df9-9e00de2b9cef","Type":"ContainerStarted","Data":"d7297da6959d9decef684835b9ab1e0942bcb1148915bc002b12c6481e832b01"} Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.319186 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-f85pk-config-7xzg7"] Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.325959 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-f85pk-config-7xzg7"] Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.368842 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" path="/var/lib/kubelet/pods/b4afb3b5-df7d-47c7-b743-6cf40f7dcb03/volumes" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.481867 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f85pk-config-5dfwg"] Dec 03 20:54:38 crc kubenswrapper[4765]: E1203 20:54:38.482322 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" containerName="ovn-config" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.482346 4765 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" containerName="ovn-config" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.482560 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4afb3b5-df7d-47c7-b743-6cf40f7dcb03" containerName="ovn-config" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.483283 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.486049 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.495806 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f85pk-config-5dfwg"] Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.682468 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-log-ovn\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.683121 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-additional-scripts\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.683332 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn2g7\" (UniqueName: \"kubernetes.io/projected/c8a888c5-e13a-49df-b58c-452d81afc3be-kube-api-access-bn2g7\") pod 
\"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.683507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run-ovn\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.683552 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-scripts\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.683727 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785082 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run-ovn\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785425 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-scripts\") pod 
\"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785610 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785739 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-additional-scripts\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785853 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-log-ovn\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785992 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn2g7\" (UniqueName: \"kubernetes.io/projected/c8a888c5-e13a-49df-b58c-452d81afc3be-kube-api-access-bn2g7\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.785480 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run-ovn\") pod 
\"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.786653 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.786712 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-log-ovn\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.787377 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-scripts\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.787618 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-additional-scripts\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:38 crc kubenswrapper[4765]: I1203 20:54:38.805334 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn2g7\" (UniqueName: \"kubernetes.io/projected/c8a888c5-e13a-49df-b58c-452d81afc3be-kube-api-access-bn2g7\") pod \"ovn-controller-f85pk-config-5dfwg\" (UID: 
\"c8a888c5-e13a-49df-b58c-452d81afc3be\") " pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.102213 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk-config-5dfwg" Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.349133 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-f85pk" Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.605357 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f85pk-config-5dfwg"] Dec 03 20:54:39 crc kubenswrapper[4765]: W1203 20:54:39.623911 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8a888c5_e13a_49df_b58c_452d81afc3be.slice/crio-11032950df1f7ebd211181eb33e843d1c8f9052ca6291671ccf4fb56eb1ce0cf WatchSource:0}: Error finding container 11032950df1f7ebd211181eb33e843d1c8f9052ca6291671ccf4fb56eb1ce0cf: Status 404 returned error can't find the container with id 11032950df1f7ebd211181eb33e843d1c8f9052ca6291671ccf4fb56eb1ce0cf Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.886674 4765 generic.go:334] "Generic (PLEG): container finished" podID="dc0c742d-3c3d-4020-89a8-53453beeab23" containerID="af9f702769e504f8ea84611b310d1cfb43aaec53772a19274b794eb495114010" exitCode=0 Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.887003 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q9s77" event={"ID":"dc0c742d-3c3d-4020-89a8-53453beeab23","Type":"ContainerDied","Data":"af9f702769e504f8ea84611b310d1cfb43aaec53772a19274b794eb495114010"} Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.890589 4765 generic.go:334] "Generic (PLEG): container finished" podID="42e811ab-603b-493f-8df9-9e00de2b9cef" containerID="611b9785371b44b85c560895feeb1f2c12fdf2c3acb831c8747dee772c66ab0b" exitCode=0 Dec 03 
20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.890696 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd63-account-create-update-rkkhn" event={"ID":"42e811ab-603b-493f-8df9-9e00de2b9cef","Type":"ContainerDied","Data":"611b9785371b44b85c560895feeb1f2c12fdf2c3acb831c8747dee772c66ab0b"} Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.892497 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-5dfwg" event={"ID":"c8a888c5-e13a-49df-b58c-452d81afc3be","Type":"ContainerStarted","Data":"11032950df1f7ebd211181eb33e843d1c8f9052ca6291671ccf4fb56eb1ce0cf"} Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.894471 4765 generic.go:334] "Generic (PLEG): container finished" podID="1959025f-e3a0-42a9-b4f7-c55151f34a91" containerID="82294af1e4bc09118963d288c16142f5cd2c6b83426c89c7224b8452b2b0b156" exitCode=0 Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.894530 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7c91-account-create-update-7cstw" event={"ID":"1959025f-e3a0-42a9-b4f7-c55151f34a91","Type":"ContainerDied","Data":"82294af1e4bc09118963d288c16142f5cd2c6b83426c89c7224b8452b2b0b156"} Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.896574 4765 generic.go:334] "Generic (PLEG): container finished" podID="6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" containerID="0aeb255f24d371cab29cd89740393ce2ad83c35782c8451c1b9e57f144d0dbc6" exitCode=0 Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.896629 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7dec-account-create-update-zdgw6" event={"ID":"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21","Type":"ContainerDied","Data":"0aeb255f24d371cab29cd89740393ce2ad83c35782c8451c1b9e57f144d0dbc6"} Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.906513 4765 generic.go:334] "Generic (PLEG): container finished" podID="0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" 
containerID="5df27bc840b213b034ef25622938ea039a34314b244aaed852501387c7208792" exitCode=0
Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.906572 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r84p" event={"ID":"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675","Type":"ContainerDied","Data":"5df27bc840b213b034ef25622938ea039a34314b244aaed852501387c7208792"}
Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.910922 4765 generic.go:334] "Generic (PLEG): container finished" podID="1e37f2d0-b9ea-4e9a-8259-0be89e132c64" containerID="f16d0ceeabe8a5de605482a02b50f040d50487ad9861e9d0da2e7f8aedb8158b" exitCode=0
Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.910957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f2xv7" event={"ID":"1e37f2d0-b9ea-4e9a-8259-0be89e132c64","Type":"ContainerDied","Data":"f16d0ceeabe8a5de605482a02b50f040d50487ad9861e9d0da2e7f8aedb8158b"}
Dec 03 20:54:39 crc kubenswrapper[4765]: I1203 20:54:39.914662 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerStarted","Data":"33e07c00ad8c3fd8ab803bec92002c24de6c0efb65a938a04ad5c06bb85e2238"}
Dec 03 20:54:40 crc kubenswrapper[4765]: I1203 20:54:40.019422 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8k76" podStartSLOduration=4.592698146 podStartE2EDuration="9.019393455s" podCreationTimestamp="2025-12-03 20:54:31 +0000 UTC" firstStartedPulling="2025-12-03 20:54:34.666827072 +0000 UTC m=+972.597372223" lastFinishedPulling="2025-12-03 20:54:39.093522371 +0000 UTC m=+977.024067532" observedRunningTime="2025-12-03 20:54:40.011240965 +0000 UTC m=+977.941786116" watchObservedRunningTime="2025-12-03 20:54:40.019393455 +0000 UTC m=+977.949938596"
Dec 03 20:54:40 crc kubenswrapper[4765]: I1203 20:54:40.944490 4765 generic.go:334] "Generic (PLEG): container finished" podID="c8a888c5-e13a-49df-b58c-452d81afc3be" containerID="07c250f6f51efa9e77a7e95e42c3f1560fcff00b5552568d710f0203ab56f2b8" exitCode=0
Dec 03 20:54:40 crc kubenswrapper[4765]: I1203 20:54:40.944582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-5dfwg" event={"ID":"c8a888c5-e13a-49df-b58c-452d81afc3be","Type":"ContainerDied","Data":"07c250f6f51efa9e77a7e95e42c3f1560fcff00b5552568d710f0203ab56f2b8"}
Dec 03 20:54:42 crc kubenswrapper[4765]: I1203 20:54:42.179991 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8k76"
Dec 03 20:54:42 crc kubenswrapper[4765]: I1203 20:54:42.180393 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q8k76"
Dec 03 20:54:42 crc kubenswrapper[4765]: I1203 20:54:42.245777 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8k76"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.160540 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd63-account-create-update-rkkhn"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.193618 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7dec-account-create-update-zdgw6"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.200108 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7c91-account-create-update-7cstw"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.285283 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q9s77"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.303678 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f2xv7"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.326060 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zjwt\" (UniqueName: \"kubernetes.io/projected/1959025f-e3a0-42a9-b4f7-c55151f34a91-kube-api-access-7zjwt\") pod \"1959025f-e3a0-42a9-b4f7-c55151f34a91\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.326187 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e811ab-603b-493f-8df9-9e00de2b9cef-operator-scripts\") pod \"42e811ab-603b-493f-8df9-9e00de2b9cef\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.326216 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qt2f\" (UniqueName: \"kubernetes.io/projected/42e811ab-603b-493f-8df9-9e00de2b9cef-kube-api-access-2qt2f\") pod \"42e811ab-603b-493f-8df9-9e00de2b9cef\" (UID: \"42e811ab-603b-493f-8df9-9e00de2b9cef\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.326241 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jk7b\" (UniqueName: \"kubernetes.io/projected/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-kube-api-access-8jk7b\") pod \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.326282 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-operator-scripts\") pod \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\" (UID: \"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.326341 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1959025f-e3a0-42a9-b4f7-c55151f34a91-operator-scripts\") pod \"1959025f-e3a0-42a9-b4f7-c55151f34a91\" (UID: \"1959025f-e3a0-42a9-b4f7-c55151f34a91\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.328520 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1959025f-e3a0-42a9-b4f7-c55151f34a91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1959025f-e3a0-42a9-b4f7-c55151f34a91" (UID: "1959025f-e3a0-42a9-b4f7-c55151f34a91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.337493 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r84p"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.337508 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e811ab-603b-493f-8df9-9e00de2b9cef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42e811ab-603b-493f-8df9-9e00de2b9cef" (UID: "42e811ab-603b-493f-8df9-9e00de2b9cef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.340448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" (UID: "6ab27ead-a53a-41ff-8f57-7fd8e0da7e21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.350416 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk-config-5dfwg"
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.359959 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e811ab-603b-493f-8df9-9e00de2b9cef-kube-api-access-2qt2f" (OuterVolumeSpecName: "kube-api-access-2qt2f") pod "42e811ab-603b-493f-8df9-9e00de2b9cef" (UID: "42e811ab-603b-493f-8df9-9e00de2b9cef"). InnerVolumeSpecName "kube-api-access-2qt2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.365834 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1959025f-e3a0-42a9-b4f7-c55151f34a91-kube-api-access-7zjwt" (OuterVolumeSpecName: "kube-api-access-7zjwt") pod "1959025f-e3a0-42a9-b4f7-c55151f34a91" (UID: "1959025f-e3a0-42a9-b4f7-c55151f34a91"). InnerVolumeSpecName "kube-api-access-7zjwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.365881 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-kube-api-access-8jk7b" (OuterVolumeSpecName: "kube-api-access-8jk7b") pod "6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" (UID: "6ab27ead-a53a-41ff-8f57-7fd8e0da7e21"). InnerVolumeSpecName "kube-api-access-8jk7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.427735 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0c742d-3c3d-4020-89a8-53453beeab23-operator-scripts\") pod \"dc0c742d-3c3d-4020-89a8-53453beeab23\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.427861 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z89q\" (UniqueName: \"kubernetes.io/projected/dc0c742d-3c3d-4020-89a8-53453beeab23-kube-api-access-6z89q\") pod \"dc0c742d-3c3d-4020-89a8-53453beeab23\" (UID: \"dc0c742d-3c3d-4020-89a8-53453beeab23\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.427896 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-operator-scripts\") pod \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428006 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fb6k\" (UniqueName: \"kubernetes.io/projected/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-kube-api-access-9fb6k\") pod \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\" (UID: \"1e37f2d0-b9ea-4e9a-8259-0be89e132c64\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428615 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e811ab-603b-493f-8df9-9e00de2b9cef-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428641 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qt2f\" (UniqueName: \"kubernetes.io/projected/42e811ab-603b-493f-8df9-9e00de2b9cef-kube-api-access-2qt2f\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428657 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jk7b\" (UniqueName: \"kubernetes.io/projected/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-kube-api-access-8jk7b\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428669 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428681 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1959025f-e3a0-42a9-b4f7-c55151f34a91-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428680 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e37f2d0-b9ea-4e9a-8259-0be89e132c64" (UID: "1e37f2d0-b9ea-4e9a-8259-0be89e132c64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428692 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zjwt\" (UniqueName: \"kubernetes.io/projected/1959025f-e3a0-42a9-b4f7-c55151f34a91-kube-api-access-7zjwt\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.428705 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0c742d-3c3d-4020-89a8-53453beeab23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc0c742d-3c3d-4020-89a8-53453beeab23" (UID: "dc0c742d-3c3d-4020-89a8-53453beeab23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.432234 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0c742d-3c3d-4020-89a8-53453beeab23-kube-api-access-6z89q" (OuterVolumeSpecName: "kube-api-access-6z89q") pod "dc0c742d-3c3d-4020-89a8-53453beeab23" (UID: "dc0c742d-3c3d-4020-89a8-53453beeab23"). InnerVolumeSpecName "kube-api-access-6z89q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.433714 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-kube-api-access-9fb6k" (OuterVolumeSpecName: "kube-api-access-9fb6k") pod "1e37f2d0-b9ea-4e9a-8259-0be89e132c64" (UID: "1e37f2d0-b9ea-4e9a-8259-0be89e132c64"). InnerVolumeSpecName "kube-api-access-9fb6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530023 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn2g7\" (UniqueName: \"kubernetes.io/projected/c8a888c5-e13a-49df-b58c-452d81afc3be-kube-api-access-bn2g7\") pod \"c8a888c5-e13a-49df-b58c-452d81afc3be\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530075 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-additional-scripts\") pod \"c8a888c5-e13a-49df-b58c-452d81afc3be\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530194 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run\") pod \"c8a888c5-e13a-49df-b58c-452d81afc3be\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530229 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-scripts\") pod \"c8a888c5-e13a-49df-b58c-452d81afc3be\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530261 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-operator-scripts\") pod \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530327 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run-ovn\") pod \"c8a888c5-e13a-49df-b58c-452d81afc3be\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530373 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-log-ovn\") pod \"c8a888c5-e13a-49df-b58c-452d81afc3be\" (UID: \"c8a888c5-e13a-49df-b58c-452d81afc3be\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.530420 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml7d6\" (UniqueName: \"kubernetes.io/projected/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-kube-api-access-ml7d6\") pod \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\" (UID: \"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675\") "
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.531012 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z89q\" (UniqueName: \"kubernetes.io/projected/dc0c742d-3c3d-4020-89a8-53453beeab23-kube-api-access-6z89q\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.531052 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.531072 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fb6k\" (UniqueName: \"kubernetes.io/projected/1e37f2d0-b9ea-4e9a-8259-0be89e132c64-kube-api-access-9fb6k\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.531091 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0c742d-3c3d-4020-89a8-53453beeab23-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.531594 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-scripts" (OuterVolumeSpecName: "scripts") pod "c8a888c5-e13a-49df-b58c-452d81afc3be" (UID: "c8a888c5-e13a-49df-b58c-452d81afc3be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.532534 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run" (OuterVolumeSpecName: "var-run") pod "c8a888c5-e13a-49df-b58c-452d81afc3be" (UID: "c8a888c5-e13a-49df-b58c-452d81afc3be"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.532786 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c8a888c5-e13a-49df-b58c-452d81afc3be" (UID: "c8a888c5-e13a-49df-b58c-452d81afc3be"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.532824 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c8a888c5-e13a-49df-b58c-452d81afc3be" (UID: "c8a888c5-e13a-49df-b58c-452d81afc3be"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.532839 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c8a888c5-e13a-49df-b58c-452d81afc3be" (UID: "c8a888c5-e13a-49df-b58c-452d81afc3be"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.533042 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" (UID: "0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.534279 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a888c5-e13a-49df-b58c-452d81afc3be-kube-api-access-bn2g7" (OuterVolumeSpecName: "kube-api-access-bn2g7") pod "c8a888c5-e13a-49df-b58c-452d81afc3be" (UID: "c8a888c5-e13a-49df-b58c-452d81afc3be"). InnerVolumeSpecName "kube-api-access-bn2g7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.535513 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-kube-api-access-ml7d6" (OuterVolumeSpecName: "kube-api-access-ml7d6") pod "0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" (UID: "0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675"). InnerVolumeSpecName "kube-api-access-ml7d6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632534 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml7d6\" (UniqueName: \"kubernetes.io/projected/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-kube-api-access-ml7d6\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632585 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn2g7\" (UniqueName: \"kubernetes.io/projected/c8a888c5-e13a-49df-b58c-452d81afc3be-kube-api-access-bn2g7\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632601 4765 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632618 4765 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632632 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8a888c5-e13a-49df-b58c-452d81afc3be-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632647 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632659 4765 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.632671 4765 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8a888c5-e13a-49df-b58c-452d81afc3be-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:44 crc kubenswrapper[4765]: I1203 20:54:44.997760 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49dvf" event={"ID":"b5b83af8-4f5b-405c-9961-3f37c37ee18b","Type":"ContainerStarted","Data":"533977e086adcb5d40ca46b8538d89a7757cdf761f5f88474e1370b0f85cf5bc"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.011489 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f2xv7" event={"ID":"1e37f2d0-b9ea-4e9a-8259-0be89e132c64","Type":"ContainerDied","Data":"90e2b7dcd2bc6f122818e262145f09d01a2e7151b499efca452aae32f86faac4"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.011594 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e2b7dcd2bc6f122818e262145f09d01a2e7151b499efca452aae32f86faac4"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.011696 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f2xv7"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.016130 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7c91-account-create-update-7cstw" event={"ID":"1959025f-e3a0-42a9-b4f7-c55151f34a91","Type":"ContainerDied","Data":"91bcf5930a795ba393fe362cbe1e05497a98bac6f2a4867cfef5331bf45acf96"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.016184 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bcf5930a795ba393fe362cbe1e05497a98bac6f2a4867cfef5331bf45acf96"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.016247 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7c91-account-create-update-7cstw"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.018959 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7dec-account-create-update-zdgw6" event={"ID":"6ab27ead-a53a-41ff-8f57-7fd8e0da7e21","Type":"ContainerDied","Data":"39e0fbfa125300a50b40f837b35466747e4620e730388e189a0c320382cdcc4a"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.019002 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e0fbfa125300a50b40f837b35466747e4620e730388e189a0c320382cdcc4a"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.019058 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7dec-account-create-update-zdgw6"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.021908 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r84p" event={"ID":"0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675","Type":"ContainerDied","Data":"19d5e0131fd0fc233312c1c497f5d7cccbbaec476d2f3364cc5549b15fe929cf"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.021962 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19d5e0131fd0fc233312c1c497f5d7cccbbaec476d2f3364cc5549b15fe929cf"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.022057 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r84p"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.033101 4765 generic.go:334] "Generic (PLEG): container finished" podID="80088b6b-01d5-403b-b051-fd7defbee240" containerID="462f835b9cddd802be5c5430dda21dc715fe276aa6fbf56cce397ee722285538" exitCode=0
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.033207 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mllvt" event={"ID":"80088b6b-01d5-403b-b051-fd7defbee240","Type":"ContainerDied","Data":"462f835b9cddd802be5c5430dda21dc715fe276aa6fbf56cce397ee722285538"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.037584 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-49dvf" podStartSLOduration=2.723261017 podStartE2EDuration="10.037562196s" podCreationTimestamp="2025-12-03 20:54:35 +0000 UTC" firstStartedPulling="2025-12-03 20:54:36.773024958 +0000 UTC m=+974.703570109" lastFinishedPulling="2025-12-03 20:54:44.087326117 +0000 UTC m=+982.017871288" observedRunningTime="2025-12-03 20:54:45.029825377 +0000 UTC m=+982.960370528" watchObservedRunningTime="2025-12-03 20:54:45.037562196 +0000 UTC m=+982.968107347"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.040217 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q9s77" event={"ID":"dc0c742d-3c3d-4020-89a8-53453beeab23","Type":"ContainerDied","Data":"8cb38d3aba61d1312c7d2cb68af4165c5ae342d7127c1fcb1be75a2e31917dea"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.040331 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q9s77"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.040367 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb38d3aba61d1312c7d2cb68af4165c5ae342d7127c1fcb1be75a2e31917dea"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.042185 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd63-account-create-update-rkkhn" event={"ID":"42e811ab-603b-493f-8df9-9e00de2b9cef","Type":"ContainerDied","Data":"d7297da6959d9decef684835b9ab1e0942bcb1148915bc002b12c6481e832b01"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.042215 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7297da6959d9decef684835b9ab1e0942bcb1148915bc002b12c6481e832b01"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.042265 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd63-account-create-update-rkkhn"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.044412 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f85pk-config-5dfwg" event={"ID":"c8a888c5-e13a-49df-b58c-452d81afc3be","Type":"ContainerDied","Data":"11032950df1f7ebd211181eb33e843d1c8f9052ca6291671ccf4fb56eb1ce0cf"}
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.044439 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11032950df1f7ebd211181eb33e843d1c8f9052ca6291671ccf4fb56eb1ce0cf"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.044469 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f85pk-config-5dfwg"
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.480398 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-f85pk-config-5dfwg"]
Dec 03 20:54:45 crc kubenswrapper[4765]: I1203 20:54:45.488938 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-f85pk-config-5dfwg"]
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.380503 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a888c5-e13a-49df-b58c-452d81afc3be" path="/var/lib/kubelet/pods/c8a888c5-e13a-49df-b58c-452d81afc3be/volumes"
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.472242 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mllvt"
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.601392 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-combined-ca-bundle\") pod \"80088b6b-01d5-403b-b051-fd7defbee240\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") "
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.601482 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-db-sync-config-data\") pod \"80088b6b-01d5-403b-b051-fd7defbee240\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") "
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.601539 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfwtx\" (UniqueName: \"kubernetes.io/projected/80088b6b-01d5-403b-b051-fd7defbee240-kube-api-access-wfwtx\") pod \"80088b6b-01d5-403b-b051-fd7defbee240\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") "
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.601617 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-config-data\") pod \"80088b6b-01d5-403b-b051-fd7defbee240\" (UID: \"80088b6b-01d5-403b-b051-fd7defbee240\") "
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.607043 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "80088b6b-01d5-403b-b051-fd7defbee240" (UID: "80088b6b-01d5-403b-b051-fd7defbee240"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.607740 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80088b6b-01d5-403b-b051-fd7defbee240-kube-api-access-wfwtx" (OuterVolumeSpecName: "kube-api-access-wfwtx") pod "80088b6b-01d5-403b-b051-fd7defbee240" (UID: "80088b6b-01d5-403b-b051-fd7defbee240"). InnerVolumeSpecName "kube-api-access-wfwtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.628641 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80088b6b-01d5-403b-b051-fd7defbee240" (UID: "80088b6b-01d5-403b-b051-fd7defbee240"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.663808 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-config-data" (OuterVolumeSpecName: "config-data") pod "80088b6b-01d5-403b-b051-fd7defbee240" (UID: "80088b6b-01d5-403b-b051-fd7defbee240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.703259 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.703328 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.703345 4765 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80088b6b-01d5-403b-b051-fd7defbee240-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:46 crc kubenswrapper[4765]: I1203 20:54:46.703358 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfwtx\" (UniqueName: \"kubernetes.io/projected/80088b6b-01d5-403b-b051-fd7defbee240-kube-api-access-wfwtx\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.062588 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mllvt" event={"ID":"80088b6b-01d5-403b-b051-fd7defbee240","Type":"ContainerDied","Data":"b61f83871e94b468f0f8bd9f64ee7cfbc834a212e8546897b408b5180fcaf5bc"}
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.062628 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b61f83871e94b468f0f8bd9f64ee7cfbc834a212e8546897b408b5180fcaf5bc"
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.062670 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mllvt"
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.454865 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-p7fxn"]
Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455286 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80088b6b-01d5-403b-b051-fd7defbee240" containerName="glance-db-sync"
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455332 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="80088b6b-01d5-403b-b051-fd7defbee240" containerName="glance-db-sync"
Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455344 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0c742d-3c3d-4020-89a8-53453beeab23" containerName="mariadb-database-create"
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455351 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0c742d-3c3d-4020-89a8-53453beeab23" containerName="mariadb-database-create"
Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455373 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" containerName="mariadb-database-create"
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455382 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" containerName="mariadb-database-create"
Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455395 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e811ab-603b-493f-8df9-9e00de2b9cef" containerName="mariadb-account-create-update"
Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455403 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e811ab-603b-493f-8df9-9e00de2b9cef" containerName="mariadb-account-create-update"
Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455415 4765 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="c8a888c5-e13a-49df-b58c-452d81afc3be" containerName="ovn-config" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455422 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a888c5-e13a-49df-b58c-452d81afc3be" containerName="ovn-config" Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455436 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455444 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455464 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e37f2d0-b9ea-4e9a-8259-0be89e132c64" containerName="mariadb-database-create" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455471 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e37f2d0-b9ea-4e9a-8259-0be89e132c64" containerName="mariadb-database-create" Dec 03 20:54:47 crc kubenswrapper[4765]: E1203 20:54:47.455495 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1959025f-e3a0-42a9-b4f7-c55151f34a91" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455503 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1959025f-e3a0-42a9-b4f7-c55151f34a91" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455682 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" containerName="mariadb-database-create" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455698 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0c742d-3c3d-4020-89a8-53453beeab23" containerName="mariadb-database-create" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 
20:54:47.455725 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e37f2d0-b9ea-4e9a-8259-0be89e132c64" containerName="mariadb-database-create" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455737 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="80088b6b-01d5-403b-b051-fd7defbee240" containerName="glance-db-sync" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455748 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a888c5-e13a-49df-b58c-452d81afc3be" containerName="ovn-config" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455759 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1959025f-e3a0-42a9-b4f7-c55151f34a91" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455774 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e811ab-603b-493f-8df9-9e00de2b9cef" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.455785 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" containerName="mariadb-account-create-update" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.456777 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.481854 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-p7fxn"] Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.518605 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-dns-svc\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.518669 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.518711 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sh9q\" (UniqueName: \"kubernetes.io/projected/c203df1f-37eb-4061-9383-abf28e8668c6-kube-api-access-8sh9q\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.518743 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.518783 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-config\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.620489 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.620572 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sh9q\" (UniqueName: \"kubernetes.io/projected/c203df1f-37eb-4061-9383-abf28e8668c6-kube-api-access-8sh9q\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.620606 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.620651 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-config\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.620733 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-dns-svc\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.621777 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-dns-svc\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.621965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.622499 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.622720 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-config\") pod \"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.644053 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sh9q\" (UniqueName: \"kubernetes.io/projected/c203df1f-37eb-4061-9383-abf28e8668c6-kube-api-access-8sh9q\") pod 
\"dnsmasq-dns-554567b4f7-p7fxn\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:47 crc kubenswrapper[4765]: I1203 20:54:47.780250 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:48 crc kubenswrapper[4765]: I1203 20:54:48.232526 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-p7fxn"] Dec 03 20:54:49 crc kubenswrapper[4765]: I1203 20:54:49.077612 4765 generic.go:334] "Generic (PLEG): container finished" podID="b5b83af8-4f5b-405c-9961-3f37c37ee18b" containerID="533977e086adcb5d40ca46b8538d89a7757cdf761f5f88474e1370b0f85cf5bc" exitCode=0 Dec 03 20:54:49 crc kubenswrapper[4765]: I1203 20:54:49.077721 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49dvf" event={"ID":"b5b83af8-4f5b-405c-9961-3f37c37ee18b","Type":"ContainerDied","Data":"533977e086adcb5d40ca46b8538d89a7757cdf761f5f88474e1370b0f85cf5bc"} Dec 03 20:54:49 crc kubenswrapper[4765]: I1203 20:54:49.081199 4765 generic.go:334] "Generic (PLEG): container finished" podID="c203df1f-37eb-4061-9383-abf28e8668c6" containerID="c41a54b14202054f6af0018a8e417b5211685132fc2dd5664a810976101f4c34" exitCode=0 Dec 03 20:54:49 crc kubenswrapper[4765]: I1203 20:54:49.081238 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" event={"ID":"c203df1f-37eb-4061-9383-abf28e8668c6","Type":"ContainerDied","Data":"c41a54b14202054f6af0018a8e417b5211685132fc2dd5664a810976101f4c34"} Dec 03 20:54:49 crc kubenswrapper[4765]: I1203 20:54:49.081267 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" event={"ID":"c203df1f-37eb-4061-9383-abf28e8668c6","Type":"ContainerStarted","Data":"45ca71e807b759d9b1dbc0d90893b08217611c61b584a5c16ddb10c95b7ea2de"} Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.094896 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" event={"ID":"c203df1f-37eb-4061-9383-abf28e8668c6","Type":"ContainerStarted","Data":"bd17057156e111f9c97bb1a898a4ec8b62ad53e2b8e5f3d7701aa715733b9b3b"} Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.123472 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" podStartSLOduration=3.123455729 podStartE2EDuration="3.123455729s" podCreationTimestamp="2025-12-03 20:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:50.117771966 +0000 UTC m=+988.048317137" watchObservedRunningTime="2025-12-03 20:54:50.123455729 +0000 UTC m=+988.054000880" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.445050 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.569837 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-config-data\") pod \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.569995 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-combined-ca-bundle\") pod \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.570029 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mztsl\" (UniqueName: \"kubernetes.io/projected/b5b83af8-4f5b-405c-9961-3f37c37ee18b-kube-api-access-mztsl\") pod 
\"b5b83af8-4f5b-405c-9961-3f37c37ee18b\" (UID: \"b5b83af8-4f5b-405c-9961-3f37c37ee18b\") " Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.575592 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b83af8-4f5b-405c-9961-3f37c37ee18b-kube-api-access-mztsl" (OuterVolumeSpecName: "kube-api-access-mztsl") pod "b5b83af8-4f5b-405c-9961-3f37c37ee18b" (UID: "b5b83af8-4f5b-405c-9961-3f37c37ee18b"). InnerVolumeSpecName "kube-api-access-mztsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.594187 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5b83af8-4f5b-405c-9961-3f37c37ee18b" (UID: "b5b83af8-4f5b-405c-9961-3f37c37ee18b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.613452 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-config-data" (OuterVolumeSpecName: "config-data") pod "b5b83af8-4f5b-405c-9961-3f37c37ee18b" (UID: "b5b83af8-4f5b-405c-9961-3f37c37ee18b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.671999 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.672047 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b83af8-4f5b-405c-9961-3f37c37ee18b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:50 crc kubenswrapper[4765]: I1203 20:54:50.672071 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mztsl\" (UniqueName: \"kubernetes.io/projected/b5b83af8-4f5b-405c-9961-3f37c37ee18b-kube-api-access-mztsl\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.106861 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49dvf" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.106948 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49dvf" event={"ID":"b5b83af8-4f5b-405c-9961-3f37c37ee18b","Type":"ContainerDied","Data":"948a7b34a29f95ca57731062e13310f51c2f04e25abefa08fa4ef96638a39b6d"} Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.106980 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948a7b34a29f95ca57731062e13310f51c2f04e25abefa08fa4ef96638a39b6d" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.107208 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.358570 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-p7fxn"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.394857 4765 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-7mbbg"] Dec 03 20:54:51 crc kubenswrapper[4765]: E1203 20:54:51.395276 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b83af8-4f5b-405c-9961-3f37c37ee18b" containerName="keystone-db-sync" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.395294 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b83af8-4f5b-405c-9961-3f37c37ee18b" containerName="keystone-db-sync" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.395534 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b83af8-4f5b-405c-9961-3f37c37ee18b" containerName="keystone-db-sync" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.396556 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.410517 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-7mbbg"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.484592 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-config\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.484658 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.484690 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-dns-svc\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.484734 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.484801 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298v4\" (UniqueName: \"kubernetes.io/projected/c5aa5eba-8483-4326-ba07-43935526ec3c-kube-api-access-298v4\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.497190 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bqn5b"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.498174 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.501943 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.502092 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.502182 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bs5mp" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.502262 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.502558 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.517998 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bqn5b"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.585754 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-scripts\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.585815 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-config\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.585963 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-config-data\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586015 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586051 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-dns-svc\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586116 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-fernet-keys\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586139 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586208 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8sh\" (UniqueName: 
\"kubernetes.io/projected/131fee95-3ed8-40bb-b358-3528c69f9644-kube-api-access-dv8sh\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586288 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298v4\" (UniqueName: \"kubernetes.io/projected/c5aa5eba-8483-4326-ba07-43935526ec3c-kube-api-access-298v4\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586394 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-combined-ca-bundle\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586435 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-credential-keys\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.586703 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-config\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.587045 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.587159 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.587553 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-dns-svc\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.618342 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298v4\" (UniqueName: \"kubernetes.io/projected/c5aa5eba-8483-4326-ba07-43935526ec3c-kube-api-access-298v4\") pod \"dnsmasq-dns-67795cd9-7mbbg\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.666823 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tnnwc"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.668364 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.675416 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tnnwc"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.676973 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.677943 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.678226 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rwgq2" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.687924 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-fernet-keys\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.687974 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8sh\" (UniqueName: \"kubernetes.io/projected/131fee95-3ed8-40bb-b358-3528c69f9644-kube-api-access-dv8sh\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.688029 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-combined-ca-bundle\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.688057 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-credential-keys\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.688079 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-scripts\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.688109 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-config-data\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.692641 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-config-data\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.692845 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-credential-keys\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.699864 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-combined-ca-bundle\") pod 
\"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.700595 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-fernet-keys\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.704645 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-scripts\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.713533 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8sh\" (UniqueName: \"kubernetes.io/projected/131fee95-3ed8-40bb-b358-3528c69f9644-kube-api-access-dv8sh\") pod \"keystone-bootstrap-bqn5b\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") " pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.724709 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.726979 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.728832 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.733170 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.735495 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.752879 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789586 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-log-httpd\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789635 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-combined-ca-bundle\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789667 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789709 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-db-sync-config-data\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789730 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-etc-machine-id\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789753 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-config-data\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789783 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9vfv\" (UniqueName: \"kubernetes.io/projected/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-kube-api-access-n9vfv\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789888 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrltt\" (UniqueName: \"kubernetes.io/projected/58b2d3ec-4f0c-4186-8d4d-301ba578af34-kube-api-access-lrltt\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-config-data\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-scripts\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789959 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-scripts\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789980 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.789994 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-run-httpd\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.821978 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bqn5b" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.839993 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nzkps"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.841186 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.844972 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nnc6n" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.876661 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.876761 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7slsb"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.887660 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.906498 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-log-httpd\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.906923 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-combined-ca-bundle\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907028 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907432 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-db-sync-config-data\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907492 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907602 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-etc-machine-id\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907708 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-config-data\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907826 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9vfv\" (UniqueName: \"kubernetes.io/projected/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-kube-api-access-n9vfv\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.907967 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrltt\" (UniqueName: \"kubernetes.io/projected/58b2d3ec-4f0c-4186-8d4d-301ba578af34-kube-api-access-lrltt\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.908055 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-config-data\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.908134 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-scripts\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.908236 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-scripts\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.908336 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.908423 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-run-httpd\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.908840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-etc-machine-id\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 
20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.909123 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-run-httpd\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.912581 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7sk2h" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.914498 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-log-httpd\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.919395 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nzkps"] Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.941547 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.942723 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-config-data\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.943065 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:51 crc kubenswrapper[4765]: I1203 20:54:51.994373 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.004981 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-db-sync-config-data\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.005356 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-scripts\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.005914 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-combined-ca-bundle\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.006400 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-config-data\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " pod="openstack/ceilometer-0" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.007865 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrltt\" (UniqueName: \"kubernetes.io/projected/58b2d3ec-4f0c-4186-8d4d-301ba578af34-kube-api-access-lrltt\") pod \"ceilometer-0\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " 
pod="openstack/ceilometer-0" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.012250 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv266\" (UniqueName: \"kubernetes.io/projected/11ecb436-7651-4a8f-a9b8-5f476df8161d-kube-api-access-cv266\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.012572 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmkr\" (UniqueName: \"kubernetes.io/projected/2c3f8651-39c3-450e-9da1-06ad1dc357a7-kube-api-access-5bmkr\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.012987 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-config\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.013024 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-combined-ca-bundle\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.013041 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-db-sync-config-data\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " 
pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.013060 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-combined-ca-bundle\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.056943 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9vfv\" (UniqueName: \"kubernetes.io/projected/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-kube-api-access-n9vfv\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.065775 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-scripts\") pod \"cinder-db-sync-tnnwc\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.080176 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7slsb"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.116074 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv266\" (UniqueName: \"kubernetes.io/projected/11ecb436-7651-4a8f-a9b8-5f476df8161d-kube-api-access-cv266\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.122685 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bmkr\" (UniqueName: \"kubernetes.io/projected/2c3f8651-39c3-450e-9da1-06ad1dc357a7-kube-api-access-5bmkr\") pod 
\"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.123343 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-config\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.123536 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-combined-ca-bundle\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.123627 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-db-sync-config-data\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.123709 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-combined-ca-bundle\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.131004 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-config\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc 
kubenswrapper[4765]: I1203 20:54:52.133424 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-combined-ca-bundle\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.141691 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f5tgv"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.144812 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.158791 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv266\" (UniqueName: \"kubernetes.io/projected/11ecb436-7651-4a8f-a9b8-5f476df8161d-kube-api-access-cv266\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.170097 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.170555 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.171453 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-db-sync-config-data\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.171748 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9ptdk" Dec 03 20:54:52 crc kubenswrapper[4765]: 
I1203 20:54:52.172729 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f5tgv"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.176779 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-combined-ca-bundle\") pod \"neutron-db-sync-nzkps\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.184006 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bmkr\" (UniqueName: \"kubernetes.io/projected/2c3f8651-39c3-450e-9da1-06ad1dc357a7-kube-api-access-5bmkr\") pod \"barbican-db-sync-7slsb\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") " pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.223774 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.233418 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkscz\" (UniqueName: \"kubernetes.io/projected/b9150bf8-239d-4d51-bc11-81e118eb19f1-kube-api-access-qkscz\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.233505 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-combined-ca-bundle\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.233561 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9150bf8-239d-4d51-bc11-81e118eb19f1-logs\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.233606 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-config-data\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.233678 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-scripts\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.259971 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-7mbbg"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.272659 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.274270 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.285266 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.285617 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.319527 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.353133 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-scripts\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.353789 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkscz\" (UniqueName: \"kubernetes.io/projected/b9150bf8-239d-4d51-bc11-81e118eb19f1-kube-api-access-qkscz\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.353844 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-combined-ca-bundle\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.353883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9150bf8-239d-4d51-bc11-81e118eb19f1-logs\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.353911 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-config-data\") 
pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.366044 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-combined-ca-bundle\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.373523 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-config-data\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.380662 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-scripts\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.391749 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9150bf8-239d-4d51-bc11-81e118eb19f1-logs\") pod \"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.407876 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-7mbbg"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.410184 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkscz\" (UniqueName: \"kubernetes.io/projected/b9150bf8-239d-4d51-bc11-81e118eb19f1-kube-api-access-qkscz\") pod 
\"placement-db-sync-f5tgv\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.418791 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzkps" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.436679 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8k76"] Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.445526 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7slsb" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.469176 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-config\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.469233 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.469252 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77hg\" (UniqueName: \"kubernetes.io/projected/8379126d-cb93-4700-81d6-393779d0a726-kube-api-access-j77hg\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.469369 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.469437 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.471707 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f5tgv" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.570929 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.571983 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.572003 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-config\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " 
pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.572045 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77hg\" (UniqueName: \"kubernetes.io/projected/8379126d-cb93-4700-81d6-393779d0a726-kube-api-access-j77hg\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.572060 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.572655 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.571898 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.573187 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc 
kubenswrapper[4765]: I1203 20:54:52.573766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-config\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.593854 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77hg\" (UniqueName: \"kubernetes.io/projected/8379126d-cb93-4700-81d6-393779d0a726-kube-api-access-j77hg\") pod \"dnsmasq-dns-5b6dbdb6f5-kbnbm\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.619294 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.728653 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bqn5b"] Dec 03 20:54:52 crc kubenswrapper[4765]: W1203 20:54:52.739248 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod131fee95_3ed8_40bb_b358_3528c69f9644.slice/crio-40fb191691d887c0a7ea11ac377e26e40e826def9aa5157a345b104b29199f22 WatchSource:0}: Error finding container 40fb191691d887c0a7ea11ac377e26e40e826def9aa5157a345b104b29199f22: Status 404 returned error can't find the container with id 40fb191691d887c0a7ea11ac377e26e40e826def9aa5157a345b104b29199f22 Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.806502 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:54:52 crc kubenswrapper[4765]: W1203 20:54:52.814135 4765 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58b2d3ec_4f0c_4186_8d4d_301ba578af34.slice/crio-637f39035914d0edb7b7f2b7d556265af1b75ac88fd58aee1b48eee16d76ccc9 WatchSource:0}: Error finding container 637f39035914d0edb7b7f2b7d556265af1b75ac88fd58aee1b48eee16d76ccc9: Status 404 returned error can't find the container with id 637f39035914d0edb7b7f2b7d556265af1b75ac88fd58aee1b48eee16d76ccc9 Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.957288 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nzkps"] Dec 03 20:54:52 crc kubenswrapper[4765]: W1203 20:54:52.965277 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eaeb80c_f6b8_48bb_80a4_3f43623cfc13.slice/crio-b221442d39d31b72e55bc05967bbd10f891a56b97789c852274fe4384ac7a60a WatchSource:0}: Error finding container b221442d39d31b72e55bc05967bbd10f891a56b97789c852274fe4384ac7a60a: Status 404 returned error can't find the container with id b221442d39d31b72e55bc05967bbd10f891a56b97789c852274fe4384ac7a60a Dec 03 20:54:52 crc kubenswrapper[4765]: I1203 20:54:52.967340 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tnnwc"] Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.157991 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7slsb"] Dec 03 20:54:53 crc kubenswrapper[4765]: W1203 20:54:53.160484 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c3f8651_39c3_450e_9da1_06ad1dc357a7.slice/crio-fe924c53d278db62fb4961fb047cb28d52600d52702f8aa5132e5ce5e4914542 WatchSource:0}: Error finding container fe924c53d278db62fb4961fb047cb28d52600d52702f8aa5132e5ce5e4914542: Status 404 returned error can't find the container with id fe924c53d278db62fb4961fb047cb28d52600d52702f8aa5132e5ce5e4914542 Dec 03 20:54:53 crc 
kubenswrapper[4765]: I1203 20:54:53.190341 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f5tgv"] Dec 03 20:54:53 crc kubenswrapper[4765]: W1203 20:54:53.197783 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9150bf8_239d_4d51_bc11_81e118eb19f1.slice/crio-684c53341759511e4efcdcec59432185560833235aa2d61e436c379584bfddd9 WatchSource:0}: Error finding container 684c53341759511e4efcdcec59432185560833235aa2d61e436c379584bfddd9: Status 404 returned error can't find the container with id 684c53341759511e4efcdcec59432185560833235aa2d61e436c379584bfddd9 Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.216423 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerStarted","Data":"637f39035914d0edb7b7f2b7d556265af1b75ac88fd58aee1b48eee16d76ccc9"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.219142 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tnnwc" event={"ID":"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13","Type":"ContainerStarted","Data":"b221442d39d31b72e55bc05967bbd10f891a56b97789c852274fe4384ac7a60a"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.224530 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7slsb" event={"ID":"2c3f8651-39c3-450e-9da1-06ad1dc357a7","Type":"ContainerStarted","Data":"fe924c53d278db62fb4961fb047cb28d52600d52702f8aa5132e5ce5e4914542"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.226939 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzkps" event={"ID":"11ecb436-7651-4a8f-a9b8-5f476df8161d","Type":"ContainerStarted","Data":"bbb4301f73f9741dd5626b5cf895e1ff79ce5b87f99e9d744fd78ee04f4370a2"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.226985 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-db-sync-nzkps" event={"ID":"11ecb436-7651-4a8f-a9b8-5f476df8161d","Type":"ContainerStarted","Data":"b6a1ffa0d9282d9024dc2d2fd7e91b1fb17b46c57bc523c205ddf819d51fba50"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.231923 4765 generic.go:334] "Generic (PLEG): container finished" podID="c5aa5eba-8483-4326-ba07-43935526ec3c" containerID="fa5a7e6d098889983a4866db1bcfe596ff886e945f74397442ea54bf8d9fa797" exitCode=0 Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.232110 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" event={"ID":"c5aa5eba-8483-4326-ba07-43935526ec3c","Type":"ContainerDied","Data":"fa5a7e6d098889983a4866db1bcfe596ff886e945f74397442ea54bf8d9fa797"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.232136 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" event={"ID":"c5aa5eba-8483-4326-ba07-43935526ec3c","Type":"ContainerStarted","Data":"6c05f47e47b2b3807622262ead25bcda7c95bcd4c7e988cd62121e66738678d9"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.234926 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqn5b" event={"ID":"131fee95-3ed8-40bb-b358-3528c69f9644","Type":"ContainerStarted","Data":"63cbf044741bdea1536246541c6615f5d238a3bd84f19ca23ae9fb8be94ec921"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.234968 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqn5b" event={"ID":"131fee95-3ed8-40bb-b358-3528c69f9644","Type":"ContainerStarted","Data":"40fb191691d887c0a7ea11ac377e26e40e826def9aa5157a345b104b29199f22"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.238073 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8k76" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="registry-server" 
containerID="cri-o://33e07c00ad8c3fd8ab803bec92002c24de6c0efb65a938a04ad5c06bb85e2238" gracePeriod=2 Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.238309 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f5tgv" event={"ID":"b9150bf8-239d-4d51-bc11-81e118eb19f1","Type":"ContainerStarted","Data":"684c53341759511e4efcdcec59432185560833235aa2d61e436c379584bfddd9"} Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.238421 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" containerName="dnsmasq-dns" containerID="cri-o://bd17057156e111f9c97bb1a898a4ec8b62ad53e2b8e5f3d7701aa715733b9b3b" gracePeriod=10 Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.247915 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nzkps" podStartSLOduration=2.247899665 podStartE2EDuration="2.247899665s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:53.242652463 +0000 UTC m=+991.173197614" watchObservedRunningTime="2025-12-03 20:54:53.247899665 +0000 UTC m=+991.178444816" Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.290122 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bqn5b" podStartSLOduration=2.290096381 podStartE2EDuration="2.290096381s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:53.259806925 +0000 UTC m=+991.190352126" watchObservedRunningTime="2025-12-03 20:54:53.290096381 +0000 UTC m=+991.220641532" Dec 03 20:54:53 crc kubenswrapper[4765]: I1203 20:54:53.353420 4765 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"] Dec 03 20:54:54 crc kubenswrapper[4765]: W1203 20:54:53.414925 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8379126d_cb93_4700_81d6_393779d0a726.slice/crio-4b1cd026869288ef762e9311fea98a1d47cfaffbdea7e15640fdeb81bb28deb4 WatchSource:0}: Error finding container 4b1cd026869288ef762e9311fea98a1d47cfaffbdea7e15640fdeb81bb28deb4: Status 404 returned error can't find the container with id 4b1cd026869288ef762e9311fea98a1d47cfaffbdea7e15640fdeb81bb28deb4 Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.594719 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.697310 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-nb\") pod \"c5aa5eba-8483-4326-ba07-43935526ec3c\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.697345 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-dns-svc\") pod \"c5aa5eba-8483-4326-ba07-43935526ec3c\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.697423 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-config\") pod \"c5aa5eba-8483-4326-ba07-43935526ec3c\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.697547 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-sb\") pod \"c5aa5eba-8483-4326-ba07-43935526ec3c\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.697633 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-298v4\" (UniqueName: \"kubernetes.io/projected/c5aa5eba-8483-4326-ba07-43935526ec3c-kube-api-access-298v4\") pod \"c5aa5eba-8483-4326-ba07-43935526ec3c\" (UID: \"c5aa5eba-8483-4326-ba07-43935526ec3c\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.709782 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5aa5eba-8483-4326-ba07-43935526ec3c-kube-api-access-298v4" (OuterVolumeSpecName: "kube-api-access-298v4") pod "c5aa5eba-8483-4326-ba07-43935526ec3c" (UID: "c5aa5eba-8483-4326-ba07-43935526ec3c"). InnerVolumeSpecName "kube-api-access-298v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.732705 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5aa5eba-8483-4326-ba07-43935526ec3c" (UID: "c5aa5eba-8483-4326-ba07-43935526ec3c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.736956 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5aa5eba-8483-4326-ba07-43935526ec3c" (UID: "c5aa5eba-8483-4326-ba07-43935526ec3c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.740653 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-config" (OuterVolumeSpecName: "config") pod "c5aa5eba-8483-4326-ba07-43935526ec3c" (UID: "c5aa5eba-8483-4326-ba07-43935526ec3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.749203 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5aa5eba-8483-4326-ba07-43935526ec3c" (UID: "c5aa5eba-8483-4326-ba07-43935526ec3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.802284 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.802347 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-298v4\" (UniqueName: \"kubernetes.io/projected/c5aa5eba-8483-4326-ba07-43935526ec3c-kube-api-access-298v4\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.802359 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.802368 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 
20:54:54 crc kubenswrapper[4765]: I1203 20:54:53.802376 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5aa5eba-8483-4326-ba07-43935526ec3c-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.257764 4765 generic.go:334] "Generic (PLEG): container finished" podID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerID="33e07c00ad8c3fd8ab803bec92002c24de6c0efb65a938a04ad5c06bb85e2238" exitCode=0 Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.257769 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerDied","Data":"33e07c00ad8c3fd8ab803bec92002c24de6c0efb65a938a04ad5c06bb85e2238"} Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.262975 4765 generic.go:334] "Generic (PLEG): container finished" podID="8379126d-cb93-4700-81d6-393779d0a726" containerID="a2aacef8a909715f1120de7e42e9ba10154f682fa41ddb9ddb83536eab103d36" exitCode=0 Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.263075 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" event={"ID":"8379126d-cb93-4700-81d6-393779d0a726","Type":"ContainerDied","Data":"a2aacef8a909715f1120de7e42e9ba10154f682fa41ddb9ddb83536eab103d36"} Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.263100 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" event={"ID":"8379126d-cb93-4700-81d6-393779d0a726","Type":"ContainerStarted","Data":"4b1cd026869288ef762e9311fea98a1d47cfaffbdea7e15640fdeb81bb28deb4"} Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.325179 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" 
event={"ID":"c5aa5eba-8483-4326-ba07-43935526ec3c","Type":"ContainerDied","Data":"6c05f47e47b2b3807622262ead25bcda7c95bcd4c7e988cd62121e66738678d9"} Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.325228 4765 scope.go:117] "RemoveContainer" containerID="fa5a7e6d098889983a4866db1bcfe596ff886e945f74397442ea54bf8d9fa797" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.325390 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-7mbbg" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.352834 4765 generic.go:334] "Generic (PLEG): container finished" podID="c203df1f-37eb-4061-9383-abf28e8668c6" containerID="bd17057156e111f9c97bb1a898a4ec8b62ad53e2b8e5f3d7701aa715733b9b3b" exitCode=0 Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.353074 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" event={"ID":"c203df1f-37eb-4061-9383-abf28e8668c6","Type":"ContainerDied","Data":"bd17057156e111f9c97bb1a898a4ec8b62ad53e2b8e5f3d7701aa715733b9b3b"} Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.432828 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-7mbbg"] Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.444083 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-7mbbg"] Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.749380 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8k76" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.753686 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.799595 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.799652 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.926880 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-catalog-content\") pod \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.926931 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-nb\") pod \"c203df1f-37eb-4061-9383-abf28e8668c6\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.926987 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-utilities\") pod \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") " Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.927021 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-d9ccw\" (UniqueName: \"kubernetes.io/projected/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-kube-api-access-d9ccw\") pod \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\" (UID: \"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a\") "
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.927251 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sh9q\" (UniqueName: \"kubernetes.io/projected/c203df1f-37eb-4061-9383-abf28e8668c6-kube-api-access-8sh9q\") pod \"c203df1f-37eb-4061-9383-abf28e8668c6\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") "
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.927319 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-config\") pod \"c203df1f-37eb-4061-9383-abf28e8668c6\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") "
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.927348 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-dns-svc\") pod \"c203df1f-37eb-4061-9383-abf28e8668c6\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") "
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.927363 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-sb\") pod \"c203df1f-37eb-4061-9383-abf28e8668c6\" (UID: \"c203df1f-37eb-4061-9383-abf28e8668c6\") "
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.930417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-utilities" (OuterVolumeSpecName: "utilities") pod "4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" (UID: "4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.954584 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-kube-api-access-d9ccw" (OuterVolumeSpecName: "kube-api-access-d9ccw") pod "4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" (UID: "4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a"). InnerVolumeSpecName "kube-api-access-d9ccw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:54 crc kubenswrapper[4765]: I1203 20:54:54.985483 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c203df1f-37eb-4061-9383-abf28e8668c6-kube-api-access-8sh9q" (OuterVolumeSpecName: "kube-api-access-8sh9q") pod "c203df1f-37eb-4061-9383-abf28e8668c6" (UID: "c203df1f-37eb-4061-9383-abf28e8668c6"). InnerVolumeSpecName "kube-api-access-8sh9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.029324 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sh9q\" (UniqueName: \"kubernetes.io/projected/c203df1f-37eb-4061-9383-abf28e8668c6-kube-api-access-8sh9q\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.029500 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.029555 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9ccw\" (UniqueName: \"kubernetes.io/projected/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-kube-api-access-d9ccw\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.031176 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" (UID: "4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.054000 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c203df1f-37eb-4061-9383-abf28e8668c6" (UID: "c203df1f-37eb-4061-9383-abf28e8668c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.080156 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c203df1f-37eb-4061-9383-abf28e8668c6" (UID: "c203df1f-37eb-4061-9383-abf28e8668c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.080196 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c203df1f-37eb-4061-9383-abf28e8668c6" (UID: "c203df1f-37eb-4061-9383-abf28e8668c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.088432 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-config" (OuterVolumeSpecName: "config") pod "c203df1f-37eb-4061-9383-abf28e8668c6" (UID: "c203df1f-37eb-4061-9383-abf28e8668c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.131252 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.131291 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.131316 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.131323 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.131331 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c203df1f-37eb-4061-9383-abf28e8668c6-config\") on node \"crc\" DevicePath \"\""
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.372744 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" event={"ID":"8379126d-cb93-4700-81d6-393779d0a726","Type":"ContainerStarted","Data":"aeed2072533078893562fd735c2c3ab0b73b847e828290bb733eaa65557d7d91"}
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.373133 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.376492 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.376506 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-p7fxn" event={"ID":"c203df1f-37eb-4061-9383-abf28e8668c6","Type":"ContainerDied","Data":"45ca71e807b759d9b1dbc0d90893b08217611c61b584a5c16ddb10c95b7ea2de"}
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.376545 4765 scope.go:117] "RemoveContainer" containerID="bd17057156e111f9c97bb1a898a4ec8b62ad53e2b8e5f3d7701aa715733b9b3b"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.381098 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8k76" event={"ID":"4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a","Type":"ContainerDied","Data":"0d8a82624570b63f6153869b9f4ff4cd62218dfc7ef0ec70068d2af32ca53abf"}
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.381185 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8k76"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.394191 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" podStartSLOduration=4.394172839 podStartE2EDuration="4.394172839s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:54:55.390772637 +0000 UTC m=+993.321317798" watchObservedRunningTime="2025-12-03 20:54:55.394172839 +0000 UTC m=+993.324717990"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.404325 4765 scope.go:117] "RemoveContainer" containerID="c41a54b14202054f6af0018a8e417b5211685132fc2dd5664a810976101f4c34"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.439979 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8k76"]
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.458573 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8k76"]
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.463625 4765 scope.go:117] "RemoveContainer" containerID="33e07c00ad8c3fd8ab803bec92002c24de6c0efb65a938a04ad5c06bb85e2238"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.469479 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-p7fxn"]
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.478908 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-p7fxn"]
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.504570 4765 scope.go:117] "RemoveContainer" containerID="0df15d55e6d4dd35221f416841152f695799862f1f10b036c02a47066ad61a78"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.556738 4765 scope.go:117] "RemoveContainer" containerID="98b57177d49b1bc6d153f7cba0c4175b43159fd937fde38882db80c11e6bfe07"
Dec 03 20:54:55 crc kubenswrapper[4765]: I1203 20:54:55.829942 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 20:54:56 crc kubenswrapper[4765]: I1203 20:54:56.374611 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" path="/var/lib/kubelet/pods/4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a/volumes"
Dec 03 20:54:56 crc kubenswrapper[4765]: I1203 20:54:56.377113 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" path="/var/lib/kubelet/pods/c203df1f-37eb-4061-9383-abf28e8668c6/volumes"
Dec 03 20:54:56 crc kubenswrapper[4765]: I1203 20:54:56.378283 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5aa5eba-8483-4326-ba07-43935526ec3c" path="/var/lib/kubelet/pods/c5aa5eba-8483-4326-ba07-43935526ec3c/volumes"
Dec 03 20:54:57 crc kubenswrapper[4765]: I1203 20:54:57.406840 4765 generic.go:334] "Generic (PLEG): container finished" podID="131fee95-3ed8-40bb-b358-3528c69f9644" containerID="63cbf044741bdea1536246541c6615f5d238a3bd84f19ca23ae9fb8be94ec921" exitCode=0
Dec 03 20:54:57 crc kubenswrapper[4765]: I1203 20:54:57.406880 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqn5b" event={"ID":"131fee95-3ed8-40bb-b358-3528c69f9644","Type":"ContainerDied","Data":"63cbf044741bdea1536246541c6615f5d238a3bd84f19ca23ae9fb8be94ec921"}
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.256556 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bqn5b"
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.339924 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-combined-ca-bundle\") pod \"131fee95-3ed8-40bb-b358-3528c69f9644\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") "
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.340488 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8sh\" (UniqueName: \"kubernetes.io/projected/131fee95-3ed8-40bb-b358-3528c69f9644-kube-api-access-dv8sh\") pod \"131fee95-3ed8-40bb-b358-3528c69f9644\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") "
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.340529 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-fernet-keys\") pod \"131fee95-3ed8-40bb-b358-3528c69f9644\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") "
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.340556 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-config-data\") pod \"131fee95-3ed8-40bb-b358-3528c69f9644\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") "
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.340571 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-scripts\") pod \"131fee95-3ed8-40bb-b358-3528c69f9644\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") "
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.340618 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-credential-keys\") pod \"131fee95-3ed8-40bb-b358-3528c69f9644\" (UID: \"131fee95-3ed8-40bb-b358-3528c69f9644\") "
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.356649 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131fee95-3ed8-40bb-b358-3528c69f9644-kube-api-access-dv8sh" (OuterVolumeSpecName: "kube-api-access-dv8sh") pod "131fee95-3ed8-40bb-b358-3528c69f9644" (UID: "131fee95-3ed8-40bb-b358-3528c69f9644"). InnerVolumeSpecName "kube-api-access-dv8sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.358167 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "131fee95-3ed8-40bb-b358-3528c69f9644" (UID: "131fee95-3ed8-40bb-b358-3528c69f9644"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.360508 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-scripts" (OuterVolumeSpecName: "scripts") pod "131fee95-3ed8-40bb-b358-3528c69f9644" (UID: "131fee95-3ed8-40bb-b358-3528c69f9644"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.368913 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "131fee95-3ed8-40bb-b358-3528c69f9644" (UID: "131fee95-3ed8-40bb-b358-3528c69f9644"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.398701 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "131fee95-3ed8-40bb-b358-3528c69f9644" (UID: "131fee95-3ed8-40bb-b358-3528c69f9644"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.423524 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-config-data" (OuterVolumeSpecName: "config-data") pod "131fee95-3ed8-40bb-b358-3528c69f9644" (UID: "131fee95-3ed8-40bb-b358-3528c69f9644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.442401 4765 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.442435 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.442447 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.442457 4765 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.442470 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131fee95-3ed8-40bb-b358-3528c69f9644-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.442480 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8sh\" (UniqueName: \"kubernetes.io/projected/131fee95-3ed8-40bb-b358-3528c69f9644-kube-api-access-dv8sh\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.447522 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bqn5b"
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.461565 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqn5b" event={"ID":"131fee95-3ed8-40bb-b358-3528c69f9644","Type":"ContainerDied","Data":"40fb191691d887c0a7ea11ac377e26e40e826def9aa5157a345b104b29199f22"}
Dec 03 20:55:00 crc kubenswrapper[4765]: I1203 20:55:00.461604 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40fb191691d887c0a7ea11ac377e26e40e826def9aa5157a345b104b29199f22"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.346208 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bqn5b"]
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.354054 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bqn5b"]
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434551 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rq8cg"]
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434867 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131fee95-3ed8-40bb-b358-3528c69f9644" containerName="keystone-bootstrap"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434884 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="131fee95-3ed8-40bb-b358-3528c69f9644" containerName="keystone-bootstrap"
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434894 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5aa5eba-8483-4326-ba07-43935526ec3c" containerName="init"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434901 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5aa5eba-8483-4326-ba07-43935526ec3c" containerName="init"
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434914 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="extract-content"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434921 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="extract-content"
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434937 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" containerName="dnsmasq-dns"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434942 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" containerName="dnsmasq-dns"
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434951 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" containerName="init"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434957 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" containerName="init"
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434968 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="extract-utilities"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434974 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="extract-utilities"
Dec 03 20:55:01 crc kubenswrapper[4765]: E1203 20:55:01.434981 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="registry-server"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.434987 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="registry-server"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.435119 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5aa5eba-8483-4326-ba07-43935526ec3c" containerName="init"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.435132 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="131fee95-3ed8-40bb-b358-3528c69f9644" containerName="keystone-bootstrap"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.435145 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4096f6f7-20e2-4ee1-8cbf-c6473ba8ab6a" containerName="registry-server"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.435155 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c203df1f-37eb-4061-9383-abf28e8668c6" containerName="dnsmasq-dns"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.436014 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.442640 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.442889 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.443040 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.443461 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.443461 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bs5mp"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.451613 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rq8cg"]
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.588737 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-config-data\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.588785 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-scripts\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.588806 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-fernet-keys\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.588845 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-credential-keys\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.588868 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-combined-ca-bundle\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.589043 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cms4p\" (UniqueName: \"kubernetes.io/projected/c617f0ad-a3ec-4407-a4bf-494c3a362a48-kube-api-access-cms4p\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.690915 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-scripts\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.690971 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-fernet-keys\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.691025 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-credential-keys\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.691052 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-combined-ca-bundle\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.691102 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cms4p\" (UniqueName: \"kubernetes.io/projected/c617f0ad-a3ec-4407-a4bf-494c3a362a48-kube-api-access-cms4p\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.691205 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-config-data\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.696071 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-credential-keys\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.696527 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-config-data\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.697096 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-fernet-keys\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.697512 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-combined-ca-bundle\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.697766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-scripts\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.706880 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cms4p\" (UniqueName: \"kubernetes.io/projected/c617f0ad-a3ec-4407-a4bf-494c3a362a48-kube-api-access-cms4p\") pod \"keystone-bootstrap-rq8cg\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:01 crc kubenswrapper[4765]: I1203 20:55:01.810236 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rq8cg"
Dec 03 20:55:02 crc kubenswrapper[4765]: I1203 20:55:02.372724 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131fee95-3ed8-40bb-b358-3528c69f9644" path="/var/lib/kubelet/pods/131fee95-3ed8-40bb-b358-3528c69f9644/volumes"
Dec 03 20:55:02 crc kubenswrapper[4765]: I1203 20:55:02.621416 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"
Dec 03 20:55:02 crc kubenswrapper[4765]: I1203 20:55:02.712800 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-tskk6"]
Dec 03 20:55:02 crc kubenswrapper[4765]: I1203 20:55:02.713037 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-tskk6" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" containerID="cri-o://4adda8522a70a47d37db5d1f4816fda7d06778fa3cf88a2519c43041d335f1c2" gracePeriod=10
Dec 03 20:55:03 crc kubenswrapper[4765]: I1203 20:55:03.482626 4765 generic.go:334] "Generic (PLEG): container finished" podID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerID="4adda8522a70a47d37db5d1f4816fda7d06778fa3cf88a2519c43041d335f1c2" exitCode=0
Dec 03 20:55:03 crc kubenswrapper[4765]: I1203 20:55:03.482670 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-tskk6" event={"ID":"f893cff2-cfbc-4c12-9781-c2d6a7f3905f","Type":"ContainerDied","Data":"4adda8522a70a47d37db5d1f4816fda7d06778fa3cf88a2519c43041d335f1c2"}
Dec 03 20:55:07 crc kubenswrapper[4765]: E1203 20:55:07.626001 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Dec 03 20:55:07 crc kubenswrapper[4765]: E1203 20:55:07.626603 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bmkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-7slsb_openstack(2c3f8651-39c3-450e-9da1-06ad1dc357a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 03 20:55:07 crc kubenswrapper[4765]: E1203 20:55:07.627786 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-7slsb" podUID="2c3f8651-39c3-450e-9da1-06ad1dc357a7"
Dec 03 20:55:08 crc kubenswrapper[4765]: E1203 20:55:08.532046 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-7slsb" podUID="2c3f8651-39c3-450e-9da1-06ad1dc357a7"
Dec 03 20:55:10 crc kubenswrapper[4765]: I1203 20:55:10.253484 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-tskk6" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout"
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.254119 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-tskk6" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout"
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.597503 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-tskk6" event={"ID":"f893cff2-cfbc-4c12-9781-c2d6a7f3905f","Type":"ContainerDied","Data":"c2ec6e870f73145b5be0ca28d83a7c6af29b08e61ecc408003d4b0f996081990"}
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.597558 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ec6e870f73145b5be0ca28d83a7c6af29b08e61ecc408003d4b0f996081990"
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.624214 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-tskk6"
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.648612 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-config\") pod \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") "
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.648777 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-sb\") pod \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") "
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.649632 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-dns-svc\") pod \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") "
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.649662 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-nb\") pod \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") "
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.649682 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmt7\" (UniqueName: \"kubernetes.io/projected/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-kube-api-access-mxmt7\") pod \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\" (UID: \"f893cff2-cfbc-4c12-9781-c2d6a7f3905f\") "
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.666927 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-kube-api-access-mxmt7" (OuterVolumeSpecName: "kube-api-access-mxmt7") pod "f893cff2-cfbc-4c12-9781-c2d6a7f3905f" (UID: "f893cff2-cfbc-4c12-9781-c2d6a7f3905f"). InnerVolumeSpecName "kube-api-access-mxmt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.715546 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f893cff2-cfbc-4c12-9781-c2d6a7f3905f" (UID: "f893cff2-cfbc-4c12-9781-c2d6a7f3905f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.717990 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f893cff2-cfbc-4c12-9781-c2d6a7f3905f" (UID: "f893cff2-cfbc-4c12-9781-c2d6a7f3905f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.724461 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-config" (OuterVolumeSpecName: "config") pod "f893cff2-cfbc-4c12-9781-c2d6a7f3905f" (UID: "f893cff2-cfbc-4c12-9781-c2d6a7f3905f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.725257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f893cff2-cfbc-4c12-9781-c2d6a7f3905f" (UID: "f893cff2-cfbc-4c12-9781-c2d6a7f3905f"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.752730 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.752795 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.752811 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.752820 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:15 crc kubenswrapper[4765]: I1203 20:55:15.752832 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxmt7\" (UniqueName: \"kubernetes.io/projected/f893cff2-cfbc-4c12-9781-c2d6a7f3905f-kube-api-access-mxmt7\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:16 crc kubenswrapper[4765]: I1203 20:55:16.610114 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-tskk6" Dec 03 20:55:16 crc kubenswrapper[4765]: I1203 20:55:16.634068 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-tskk6"] Dec 03 20:55:16 crc kubenswrapper[4765]: I1203 20:55:16.641083 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-tskk6"] Dec 03 20:55:16 crc kubenswrapper[4765]: E1203 20:55:16.830236 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 03 20:55:16 crc kubenswrapper[4765]: E1203 20:55:16.830392 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubP
athExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9vfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tnnwc_openstack(5eaeb80c-f6b8-48bb-80a4-3f43623cfc13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 20:55:16 crc kubenswrapper[4765]: E1203 20:55:16.831763 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tnnwc" podUID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.284845 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rq8cg"] Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.620051 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-rq8cg" event={"ID":"c617f0ad-a3ec-4407-a4bf-494c3a362a48","Type":"ContainerStarted","Data":"9f19e720b40348d1faa22d92bd1650cecf0e914329cb880f7dbb4bd47d3a9cb9"} Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.620473 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq8cg" event={"ID":"c617f0ad-a3ec-4407-a4bf-494c3a362a48","Type":"ContainerStarted","Data":"901b8f546fc0922e15ad7750abe32edb7ff9cd367d5828024cd23587b597a572"} Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.665230 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f5tgv" event={"ID":"b9150bf8-239d-4d51-bc11-81e118eb19f1","Type":"ContainerStarted","Data":"ea01e06e5aa1444e9af260d40003e11f7e2fadda9cc92556c1e97718ec053c77"} Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.669471 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerStarted","Data":"eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3"} Dec 03 20:55:17 crc kubenswrapper[4765]: E1203 20:55:17.670251 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tnnwc" podUID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.688094 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rq8cg" podStartSLOduration=16.688073484 podStartE2EDuration="16.688073484s" podCreationTimestamp="2025-12-03 20:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:17.67677157 +0000 UTC m=+1015.607316741" 
watchObservedRunningTime="2025-12-03 20:55:17.688073484 +0000 UTC m=+1015.618618635" Dec 03 20:55:17 crc kubenswrapper[4765]: I1203 20:55:17.731944 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f5tgv" podStartSLOduration=3.125376536 podStartE2EDuration="26.731924705s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="2025-12-03 20:54:53.199587334 +0000 UTC m=+991.130132485" lastFinishedPulling="2025-12-03 20:55:16.806135503 +0000 UTC m=+1014.736680654" observedRunningTime="2025-12-03 20:55:17.73066032 +0000 UTC m=+1015.661205471" watchObservedRunningTime="2025-12-03 20:55:17.731924705 +0000 UTC m=+1015.662469856" Dec 03 20:55:18 crc kubenswrapper[4765]: I1203 20:55:18.379554 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" path="/var/lib/kubelet/pods/f893cff2-cfbc-4c12-9781-c2d6a7f3905f/volumes" Dec 03 20:55:18 crc kubenswrapper[4765]: I1203 20:55:18.681701 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerStarted","Data":"798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc"} Dec 03 20:55:18 crc kubenswrapper[4765]: I1203 20:55:18.683756 4765 generic.go:334] "Generic (PLEG): container finished" podID="11ecb436-7651-4a8f-a9b8-5f476df8161d" containerID="bbb4301f73f9741dd5626b5cf895e1ff79ce5b87f99e9d744fd78ee04f4370a2" exitCode=0 Dec 03 20:55:18 crc kubenswrapper[4765]: I1203 20:55:18.683930 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzkps" event={"ID":"11ecb436-7651-4a8f-a9b8-5f476df8161d","Type":"ContainerDied","Data":"bbb4301f73f9741dd5626b5cf895e1ff79ce5b87f99e9d744fd78ee04f4370a2"} Dec 03 20:55:19 crc kubenswrapper[4765]: I1203 20:55:19.695500 4765 generic.go:334] "Generic (PLEG): container finished" podID="b9150bf8-239d-4d51-bc11-81e118eb19f1" 
containerID="ea01e06e5aa1444e9af260d40003e11f7e2fadda9cc92556c1e97718ec053c77" exitCode=0 Dec 03 20:55:19 crc kubenswrapper[4765]: I1203 20:55:19.695584 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f5tgv" event={"ID":"b9150bf8-239d-4d51-bc11-81e118eb19f1","Type":"ContainerDied","Data":"ea01e06e5aa1444e9af260d40003e11f7e2fadda9cc92556c1e97718ec053c77"} Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.033906 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzkps" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.145319 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-config\") pod \"11ecb436-7651-4a8f-a9b8-5f476df8161d\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.145393 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv266\" (UniqueName: \"kubernetes.io/projected/11ecb436-7651-4a8f-a9b8-5f476df8161d-kube-api-access-cv266\") pod \"11ecb436-7651-4a8f-a9b8-5f476df8161d\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.145524 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-combined-ca-bundle\") pod \"11ecb436-7651-4a8f-a9b8-5f476df8161d\" (UID: \"11ecb436-7651-4a8f-a9b8-5f476df8161d\") " Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.165195 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ecb436-7651-4a8f-a9b8-5f476df8161d-kube-api-access-cv266" (OuterVolumeSpecName: "kube-api-access-cv266") pod "11ecb436-7651-4a8f-a9b8-5f476df8161d" (UID: 
"11ecb436-7651-4a8f-a9b8-5f476df8161d"). InnerVolumeSpecName "kube-api-access-cv266". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.173643 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-config" (OuterVolumeSpecName: "config") pod "11ecb436-7651-4a8f-a9b8-5f476df8161d" (UID: "11ecb436-7651-4a8f-a9b8-5f476df8161d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.195664 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11ecb436-7651-4a8f-a9b8-5f476df8161d" (UID: "11ecb436-7651-4a8f-a9b8-5f476df8161d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.248120 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.248165 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv266\" (UniqueName: \"kubernetes.io/projected/11ecb436-7651-4a8f-a9b8-5f476df8161d-kube-api-access-cv266\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.248180 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ecb436-7651-4a8f-a9b8-5f476df8161d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.255593 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-tskk6" 
podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.706648 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzkps" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.707212 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzkps" event={"ID":"11ecb436-7651-4a8f-a9b8-5f476df8161d","Type":"ContainerDied","Data":"b6a1ffa0d9282d9024dc2d2fd7e91b1fb17b46c57bc523c205ddf819d51fba50"} Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.707235 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a1ffa0d9282d9024dc2d2fd7e91b1fb17b46c57bc523c205ddf819d51fba50" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.917685 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rd2vs"] Dec 03 20:55:20 crc kubenswrapper[4765]: E1203 20:55:20.918357 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="init" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.918374 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="init" Dec 03 20:55:20 crc kubenswrapper[4765]: E1203 20:55:20.918391 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ecb436-7651-4a8f-a9b8-5f476df8161d" containerName="neutron-db-sync" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.918398 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ecb436-7651-4a8f-a9b8-5f476df8161d" containerName="neutron-db-sync" Dec 03 20:55:20 crc kubenswrapper[4765]: E1203 20:55:20.918408 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" Dec 03 
20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.918414 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.918575 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ecb436-7651-4a8f-a9b8-5f476df8161d" containerName="neutron-db-sync" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.918594 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f893cff2-cfbc-4c12-9781-c2d6a7f3905f" containerName="dnsmasq-dns" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.919434 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.975372 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7655896996-bvmmg"] Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.977111 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:20 crc kubenswrapper[4765]: I1203 20:55:20.985948 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rd2vs"] Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.026320 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.026567 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.026686 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.026681 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nnc6n" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.063325 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-config\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.063384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlpj\" (UniqueName: \"kubernetes.io/projected/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-kube-api-access-rxlpj\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.063445 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.063475 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.063526 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.125370 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7655896996-bvmmg"] Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164484 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-config\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164537 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7ld\" (UniqueName: \"kubernetes.io/projected/d7b131b1-8d60-4fa4-bb58-aca271fe6524-kube-api-access-2f7ld\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164569 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-config\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164593 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxlpj\" (UniqueName: \"kubernetes.io/projected/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-kube-api-access-rxlpj\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164616 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-ovndb-tls-certs\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164649 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164669 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-combined-ca-bundle\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164688 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164727 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.164758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-httpd-config\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.165752 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-config\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.166084 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.166440 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.169557 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.193870 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxlpj\" (UniqueName: \"kubernetes.io/projected/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-kube-api-access-rxlpj\") pod \"dnsmasq-dns-5f66db59b9-rd2vs\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.266050 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-httpd-config\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.266625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-config\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.266654 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7ld\" (UniqueName: \"kubernetes.io/projected/d7b131b1-8d60-4fa4-bb58-aca271fe6524-kube-api-access-2f7ld\") pod 
\"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.266685 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-ovndb-tls-certs\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.266726 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-combined-ca-bundle\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.270086 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-httpd-config\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.270282 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-combined-ca-bundle\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.275224 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-ovndb-tls-certs\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " 
pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.290154 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7ld\" (UniqueName: \"kubernetes.io/projected/d7b131b1-8d60-4fa4-bb58-aca271fe6524-kube-api-access-2f7ld\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.295731 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.298899 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-config\") pod \"neutron-7655896996-bvmmg\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.331171 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.716833 4765 generic.go:334] "Generic (PLEG): container finished" podID="c617f0ad-a3ec-4407-a4bf-494c3a362a48" containerID="9f19e720b40348d1faa22d92bd1650cecf0e914329cb880f7dbb4bd47d3a9cb9" exitCode=0 Dec 03 20:55:21 crc kubenswrapper[4765]: I1203 20:55:21.716888 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq8cg" event={"ID":"c617f0ad-a3ec-4407-a4bf-494c3a362a48","Type":"ContainerDied","Data":"9f19e720b40348d1faa22d92bd1650cecf0e914329cb880f7dbb4bd47d3a9cb9"} Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.559045 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8458f9f649-c6lrl"] Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.561124 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.569600 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8458f9f649-c6lrl"] Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.571731 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.571863 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723292 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-internal-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723381 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-combined-ca-bundle\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-httpd-config\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723465 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-config\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723493 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52wb\" (UniqueName: \"kubernetes.io/projected/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-kube-api-access-x52wb\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723543 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-ovndb-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.723622 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-public-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825410 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-public-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825482 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-internal-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825502 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-combined-ca-bundle\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-httpd-config\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825571 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-config\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825595 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x52wb\" (UniqueName: \"kubernetes.io/projected/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-kube-api-access-x52wb\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.825646 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-ovndb-tls-certs\") pod 
\"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.831749 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-ovndb-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.831794 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-httpd-config\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.831895 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-combined-ca-bundle\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.832189 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-public-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.832899 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-internal-tls-certs\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 
20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.833713 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-config\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.860881 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x52wb\" (UniqueName: \"kubernetes.io/projected/1f78f95a-adb3-4939-a8f0-3fdd4d3757da-kube-api-access-x52wb\") pod \"neutron-8458f9f649-c6lrl\" (UID: \"1f78f95a-adb3-4939-a8f0-3fdd4d3757da\") " pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:23 crc kubenswrapper[4765]: I1203 20:55:23.944899 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:24 crc kubenswrapper[4765]: I1203 20:55:24.798516 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:55:24 crc kubenswrapper[4765]: I1203 20:55:24.798851 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.321910 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f5tgv" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.330708 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rq8cg" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397442 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-config-data\") pod \"b9150bf8-239d-4d51-bc11-81e118eb19f1\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397495 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-scripts\") pod \"b9150bf8-239d-4d51-bc11-81e118eb19f1\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397517 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-fernet-keys\") pod \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397551 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-scripts\") pod \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397600 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-combined-ca-bundle\") pod \"b9150bf8-239d-4d51-bc11-81e118eb19f1\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397662 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkscz\" (UniqueName: 
\"kubernetes.io/projected/b9150bf8-239d-4d51-bc11-81e118eb19f1-kube-api-access-qkscz\") pod \"b9150bf8-239d-4d51-bc11-81e118eb19f1\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-config-data\") pod \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397721 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9150bf8-239d-4d51-bc11-81e118eb19f1-logs\") pod \"b9150bf8-239d-4d51-bc11-81e118eb19f1\" (UID: \"b9150bf8-239d-4d51-bc11-81e118eb19f1\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397769 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cms4p\" (UniqueName: \"kubernetes.io/projected/c617f0ad-a3ec-4407-a4bf-494c3a362a48-kube-api-access-cms4p\") pod \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397833 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-credential-keys\") pod \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.397894 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-combined-ca-bundle\") pod \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\" (UID: \"c617f0ad-a3ec-4407-a4bf-494c3a362a48\") " Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 
20:55:26.398281 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9150bf8-239d-4d51-bc11-81e118eb19f1-logs" (OuterVolumeSpecName: "logs") pod "b9150bf8-239d-4d51-bc11-81e118eb19f1" (UID: "b9150bf8-239d-4d51-bc11-81e118eb19f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.398522 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9150bf8-239d-4d51-bc11-81e118eb19f1-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.405194 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c617f0ad-a3ec-4407-a4bf-494c3a362a48-kube-api-access-cms4p" (OuterVolumeSpecName: "kube-api-access-cms4p") pod "c617f0ad-a3ec-4407-a4bf-494c3a362a48" (UID: "c617f0ad-a3ec-4407-a4bf-494c3a362a48"). InnerVolumeSpecName "kube-api-access-cms4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.406706 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-scripts" (OuterVolumeSpecName: "scripts") pod "b9150bf8-239d-4d51-bc11-81e118eb19f1" (UID: "b9150bf8-239d-4d51-bc11-81e118eb19f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.416168 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c617f0ad-a3ec-4407-a4bf-494c3a362a48" (UID: "c617f0ad-a3ec-4407-a4bf-494c3a362a48"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.417143 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-scripts" (OuterVolumeSpecName: "scripts") pod "c617f0ad-a3ec-4407-a4bf-494c3a362a48" (UID: "c617f0ad-a3ec-4407-a4bf-494c3a362a48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.417719 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9150bf8-239d-4d51-bc11-81e118eb19f1-kube-api-access-qkscz" (OuterVolumeSpecName: "kube-api-access-qkscz") pod "b9150bf8-239d-4d51-bc11-81e118eb19f1" (UID: "b9150bf8-239d-4d51-bc11-81e118eb19f1"). InnerVolumeSpecName "kube-api-access-qkscz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.419231 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c617f0ad-a3ec-4407-a4bf-494c3a362a48" (UID: "c617f0ad-a3ec-4407-a4bf-494c3a362a48"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.450379 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c617f0ad-a3ec-4407-a4bf-494c3a362a48" (UID: "c617f0ad-a3ec-4407-a4bf-494c3a362a48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.455984 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-config-data" (OuterVolumeSpecName: "config-data") pod "b9150bf8-239d-4d51-bc11-81e118eb19f1" (UID: "b9150bf8-239d-4d51-bc11-81e118eb19f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.464187 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-config-data" (OuterVolumeSpecName: "config-data") pod "c617f0ad-a3ec-4407-a4bf-494c3a362a48" (UID: "c617f0ad-a3ec-4407-a4bf-494c3a362a48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.466040 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9150bf8-239d-4d51-bc11-81e118eb19f1" (UID: "b9150bf8-239d-4d51-bc11-81e118eb19f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499370 4765 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499720 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499732 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499740 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499749 4765 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499759 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499769 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9150bf8-239d-4d51-bc11-81e118eb19f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499777 4765 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-qkscz\" (UniqueName: \"kubernetes.io/projected/b9150bf8-239d-4d51-bc11-81e118eb19f1-kube-api-access-qkscz\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499785 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c617f0ad-a3ec-4407-a4bf-494c3a362a48-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.499793 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cms4p\" (UniqueName: \"kubernetes.io/projected/c617f0ad-a3ec-4407-a4bf-494c3a362a48-kube-api-access-cms4p\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.767622 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f5tgv" event={"ID":"b9150bf8-239d-4d51-bc11-81e118eb19f1","Type":"ContainerDied","Data":"684c53341759511e4efcdcec59432185560833235aa2d61e436c379584bfddd9"} Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.767668 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="684c53341759511e4efcdcec59432185560833235aa2d61e436c379584bfddd9" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.767735 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f5tgv" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.782384 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerStarted","Data":"2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5"} Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.785864 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7slsb" event={"ID":"2c3f8651-39c3-450e-9da1-06ad1dc357a7","Type":"ContainerStarted","Data":"c4163dafb664f6d6c108a64fc28d978da5dbf70b293ab5ca7cb7241f9f21c012"} Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.787385 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq8cg" event={"ID":"c617f0ad-a3ec-4407-a4bf-494c3a362a48","Type":"ContainerDied","Data":"901b8f546fc0922e15ad7750abe32edb7ff9cd367d5828024cd23587b597a572"} Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.787426 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901b8f546fc0922e15ad7750abe32edb7ff9cd367d5828024cd23587b597a572" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.787440 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rq8cg" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.807406 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7slsb" podStartSLOduration=2.489368495 podStartE2EDuration="35.80739183s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="2025-12-03 20:54:53.164930751 +0000 UTC m=+991.095475902" lastFinishedPulling="2025-12-03 20:55:26.482954086 +0000 UTC m=+1024.413499237" observedRunningTime="2025-12-03 20:55:26.799987911 +0000 UTC m=+1024.730533062" watchObservedRunningTime="2025-12-03 20:55:26.80739183 +0000 UTC m=+1024.737936981" Dec 03 20:55:26 crc kubenswrapper[4765]: I1203 20:55:26.871774 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rd2vs"] Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.079284 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8458f9f649-c6lrl"] Dec 03 20:55:27 crc kubenswrapper[4765]: W1203 20:55:27.119396 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f78f95a_adb3_4939_a8f0_3fdd4d3757da.slice/crio-3354a3303ba870b863f89fb95895c419cd9f206c5f0d733b25239f7444e526a0 WatchSource:0}: Error finding container 3354a3303ba870b863f89fb95895c419cd9f206c5f0d733b25239f7444e526a0: Status 404 returned error can't find the container with id 3354a3303ba870b863f89fb95895c419cd9f206c5f0d733b25239f7444e526a0 Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.165595 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7655896996-bvmmg"] Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.430421 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c98cc54bd-jknm9"] Dec 03 20:55:27 crc kubenswrapper[4765]: E1203 20:55:27.430762 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b9150bf8-239d-4d51-bc11-81e118eb19f1" containerName="placement-db-sync" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.430777 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9150bf8-239d-4d51-bc11-81e118eb19f1" containerName="placement-db-sync" Dec 03 20:55:27 crc kubenswrapper[4765]: E1203 20:55:27.430796 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c617f0ad-a3ec-4407-a4bf-494c3a362a48" containerName="keystone-bootstrap" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.430802 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c617f0ad-a3ec-4407-a4bf-494c3a362a48" containerName="keystone-bootstrap" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.430969 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9150bf8-239d-4d51-bc11-81e118eb19f1" containerName="placement-db-sync" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.430989 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c617f0ad-a3ec-4407-a4bf-494c3a362a48" containerName="keystone-bootstrap" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.431783 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c98cc54bd-jknm9" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.435100 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.435155 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9ptdk" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.435242 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.435409 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.435487 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.480913 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c98cc54bd-jknm9"] Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.589899 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57c6f94f6-xmzln"] Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.591136 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.604937 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.605178 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bs5mp"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.605313 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.605433 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.605523 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.606220 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.620732 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e714435f-b27b-485e-82cb-4cd1f1491cac-logs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.621166 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-scripts\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.621218 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-internal-tls-certs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.621242 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-public-tls-certs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.621270 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-config-data\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.621331 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzk8m\" (UniqueName: \"kubernetes.io/projected/e714435f-b27b-485e-82cb-4cd1f1491cac-kube-api-access-pzk8m\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.621378 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-combined-ca-bundle\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.623332 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57c6f94f6-xmzln"]
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722457 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-combined-ca-bundle\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-combined-ca-bundle\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722535 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-scripts\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wdnj\" (UniqueName: \"kubernetes.io/projected/4e9b168b-07ea-4870-ba96-9680c4530133-kube-api-access-9wdnj\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722575 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e714435f-b27b-485e-82cb-4cd1f1491cac-logs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722603 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-scripts\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722630 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-internal-tls-certs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722644 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-public-tls-certs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722669 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-config-data\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722690 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-config-data\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722706 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-public-tls-certs\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722739 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-internal-tls-certs\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722757 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzk8m\" (UniqueName: \"kubernetes.io/projected/e714435f-b27b-485e-82cb-4cd1f1491cac-kube-api-access-pzk8m\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722774 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-fernet-keys\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.722788 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-credential-keys\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.724327 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e714435f-b27b-485e-82cb-4cd1f1491cac-logs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.728253 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-public-tls-certs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.728267 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-config-data\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.728908 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-scripts\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.728909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-combined-ca-bundle\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.729720 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e714435f-b27b-485e-82cb-4cd1f1491cac-internal-tls-certs\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.741238 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzk8m\" (UniqueName: \"kubernetes.io/projected/e714435f-b27b-485e-82cb-4cd1f1491cac-kube-api-access-pzk8m\") pod \"placement-5c98cc54bd-jknm9\" (UID: \"e714435f-b27b-485e-82cb-4cd1f1491cac\") " pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.797049 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.801654 4765 generic.go:334] "Generic (PLEG): container finished" podID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerID="62c351df5f9c919853733b66c8b2993c5f5140e98797c1fd8d68523c0b9f85e6" exitCode=0
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.801715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" event={"ID":"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d","Type":"ContainerDied","Data":"62c351df5f9c919853733b66c8b2993c5f5140e98797c1fd8d68523c0b9f85e6"}
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.801741 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" event={"ID":"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d","Type":"ContainerStarted","Data":"f018a74a719565e7e6534c0376f73a0f193bb3d9c3c1a1c2de62b29a48fa054e"}
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.809851 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8458f9f649-c6lrl" event={"ID":"1f78f95a-adb3-4939-a8f0-3fdd4d3757da","Type":"ContainerStarted","Data":"a3387a159b5af7f140d4bd0adf8db0f95b4d89a2339f1b7a7dba772520d3b7bf"}
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.809929 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8458f9f649-c6lrl" event={"ID":"1f78f95a-adb3-4939-a8f0-3fdd4d3757da","Type":"ContainerStarted","Data":"3354a3303ba870b863f89fb95895c419cd9f206c5f0d733b25239f7444e526a0"}
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.819526 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7655896996-bvmmg" event={"ID":"d7b131b1-8d60-4fa4-bb58-aca271fe6524","Type":"ContainerStarted","Data":"2cd89c5ed7d221a041e2723fd51a7215fa1ce2050e692016d0630e8132058798"}
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.819579 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7655896996-bvmmg" event={"ID":"d7b131b1-8d60-4fa4-bb58-aca271fe6524","Type":"ContainerStarted","Data":"34241ccfc31ea9081d97e5d507d05ba6f2ca6aeb7552902b263a3673f4b096d7"}
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828510 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wdnj\" (UniqueName: \"kubernetes.io/projected/4e9b168b-07ea-4870-ba96-9680c4530133-kube-api-access-9wdnj\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828636 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-config-data\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828656 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-public-tls-certs\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828711 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-internal-tls-certs\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828730 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-fernet-keys\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828747 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-credential-keys\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828810 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-combined-ca-bundle\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.828832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-scripts\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.835261 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-credential-keys\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.837139 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-scripts\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.842577 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-combined-ca-bundle\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.843460 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-fernet-keys\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.843537 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-public-tls-certs\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.843845 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-config-data\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.844483 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9b168b-07ea-4870-ba96-9680c4530133-internal-tls-certs\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:27 crc kubenswrapper[4765]: I1203 20:55:27.853597 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wdnj\" (UniqueName: \"kubernetes.io/projected/4e9b168b-07ea-4870-ba96-9680c4530133-kube-api-access-9wdnj\") pod \"keystone-57c6f94f6-xmzln\" (UID: \"4e9b168b-07ea-4870-ba96-9680c4530133\") " pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.015717 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.296441 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c98cc54bd-jknm9"]
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.532600 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57c6f94f6-xmzln"]
Dec 03 20:55:28 crc kubenswrapper[4765]: W1203 20:55:28.543074 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e9b168b_07ea_4870_ba96_9680c4530133.slice/crio-07cee64c3d4caacaf96a0d22769e1acce7193e5ce0d9312a6bb5c74059e83e3c WatchSource:0}: Error finding container 07cee64c3d4caacaf96a0d22769e1acce7193e5ce0d9312a6bb5c74059e83e3c: Status 404 returned error can't find the container with id 07cee64c3d4caacaf96a0d22769e1acce7193e5ce0d9312a6bb5c74059e83e3c
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.830593 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57c6f94f6-xmzln" event={"ID":"4e9b168b-07ea-4870-ba96-9680c4530133","Type":"ContainerStarted","Data":"07cee64c3d4caacaf96a0d22769e1acce7193e5ce0d9312a6bb5c74059e83e3c"}
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.836889 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" event={"ID":"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d","Type":"ContainerStarted","Data":"39c041192db9e26f65cbfdd5839be11157490e4a90469cc5747d16e3513a285f"}
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.837043 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs"
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.845321 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7655896996-bvmmg" event={"ID":"d7b131b1-8d60-4fa4-bb58-aca271fe6524","Type":"ContainerStarted","Data":"538030bcbc76a23791f1b760308be19bda72fd17241427b2eba81c0586719b41"}
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.845493 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7655896996-bvmmg"
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.851096 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c98cc54bd-jknm9" event={"ID":"e714435f-b27b-485e-82cb-4cd1f1491cac","Type":"ContainerStarted","Data":"4bbfae718d8df2816d6005b9ae4b570866051b16065e6e2f82270b2920853d2f"}
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.851119 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c98cc54bd-jknm9" event={"ID":"e714435f-b27b-485e-82cb-4cd1f1491cac","Type":"ContainerStarted","Data":"70bc51db7b729b460699fd1cd53284b689254ea1076d7e40e445be87c6f21b56"}
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.854268 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" podStartSLOduration=8.854259049 podStartE2EDuration="8.854259049s" podCreationTimestamp="2025-12-03 20:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:28.852655985 +0000 UTC m=+1026.783201156" watchObservedRunningTime="2025-12-03 20:55:28.854259049 +0000 UTC m=+1026.784804210"
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.856440 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8458f9f649-c6lrl" event={"ID":"1f78f95a-adb3-4939-a8f0-3fdd4d3757da","Type":"ContainerStarted","Data":"17beca51970b70cdce1f221e64a8609418cd0355690c1bc6c23ccbee1d211e92"}
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.856635 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8458f9f649-c6lrl"
Dec 03 20:55:28 crc kubenswrapper[4765]: I1203 20:55:28.871166 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7655896996-bvmmg" podStartSLOduration=8.871149283 podStartE2EDuration="8.871149283s" podCreationTimestamp="2025-12-03 20:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:28.867983408 +0000 UTC m=+1026.798528559" watchObservedRunningTime="2025-12-03 20:55:28.871149283 +0000 UTC m=+1026.801694454"
Dec 03 20:55:29 crc kubenswrapper[4765]: I1203 20:55:29.866009 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57c6f94f6-xmzln" event={"ID":"4e9b168b-07ea-4870-ba96-9680c4530133","Type":"ContainerStarted","Data":"5ea7556c1ddc057af73dd6b05e9d0ef5267b280a7b81bc1ea1cfbc79c82ed740"}
Dec 03 20:55:29 crc kubenswrapper[4765]: I1203 20:55:29.866387 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-57c6f94f6-xmzln"
Dec 03 20:55:29 crc kubenswrapper[4765]: I1203 20:55:29.879493 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c98cc54bd-jknm9" event={"ID":"e714435f-b27b-485e-82cb-4cd1f1491cac","Type":"ContainerStarted","Data":"6b1c10af33c8826f534a1fe94d86127306cf2bc11031b41989c8226f0458613a"}
Dec 03 20:55:29 crc kubenswrapper[4765]: I1203 20:55:29.889558 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8458f9f649-c6lrl" podStartSLOduration=6.889536517 podStartE2EDuration="6.889536517s" podCreationTimestamp="2025-12-03 20:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:28.892505518 +0000 UTC m=+1026.823050679" watchObservedRunningTime="2025-12-03 20:55:29.889536517 +0000 UTC m=+1027.820081668"
Dec 03 20:55:29 crc kubenswrapper[4765]: I1203 20:55:29.908614 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c98cc54bd-jknm9" podStartSLOduration=2.908595939 podStartE2EDuration="2.908595939s" podCreationTimestamp="2025-12-03 20:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:29.907464409 +0000 UTC m=+1027.838009590" watchObservedRunningTime="2025-12-03 20:55:29.908595939 +0000 UTC m=+1027.839141090"
Dec 03 20:55:29 crc kubenswrapper[4765]: I1203 20:55:29.916954 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57c6f94f6-xmzln" podStartSLOduration=2.916932494 podStartE2EDuration="2.916932494s" podCreationTimestamp="2025-12-03 20:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:29.890760569 +0000 UTC m=+1027.821305730" watchObservedRunningTime="2025-12-03 20:55:29.916932494 +0000 UTC m=+1027.847477645"
Dec 03 20:55:30 crc kubenswrapper[4765]: I1203 20:55:30.889819 4765 generic.go:334] "Generic (PLEG): container finished" podID="2c3f8651-39c3-450e-9da1-06ad1dc357a7" containerID="c4163dafb664f6d6c108a64fc28d978da5dbf70b293ab5ca7cb7241f9f21c012" exitCode=0
Dec 03 20:55:30 crc kubenswrapper[4765]: I1203 20:55:30.889963 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7slsb" event={"ID":"2c3f8651-39c3-450e-9da1-06ad1dc357a7","Type":"ContainerDied","Data":"c4163dafb664f6d6c108a64fc28d978da5dbf70b293ab5ca7cb7241f9f21c012"}
Dec 03 20:55:30 crc kubenswrapper[4765]: I1203 20:55:30.895572 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tnnwc" event={"ID":"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13","Type":"ContainerStarted","Data":"e586e0d7cbce1ab355b8717dfdf1800859ceb72540fc34dd4cb5b10433155232"}
Dec 03 20:55:30 crc kubenswrapper[4765]: I1203 20:55:30.895693 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:30 crc kubenswrapper[4765]: I1203 20:55:30.896004 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c98cc54bd-jknm9"
Dec 03 20:55:30 crc kubenswrapper[4765]: I1203 20:55:30.929968 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tnnwc" podStartSLOduration=3.552623765 podStartE2EDuration="39.929947953s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="2025-12-03 20:54:52.972700636 +0000 UTC m=+990.903245787" lastFinishedPulling="2025-12-03 20:55:29.350024824 +0000 UTC m=+1027.280569975" observedRunningTime="2025-12-03 20:55:30.923845609 +0000 UTC m=+1028.854390770" watchObservedRunningTime="2025-12-03 20:55:30.929947953 +0000 UTC m=+1028.860493104"
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.172510 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7slsb"
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.328066 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-combined-ca-bundle\") pod \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") "
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.328530 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bmkr\" (UniqueName: \"kubernetes.io/projected/2c3f8651-39c3-450e-9da1-06ad1dc357a7-kube-api-access-5bmkr\") pod \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") "
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.328636 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-db-sync-config-data\") pod \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\" (UID: \"2c3f8651-39c3-450e-9da1-06ad1dc357a7\") "
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.334627 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2c3f8651-39c3-450e-9da1-06ad1dc357a7" (UID: "2c3f8651-39c3-450e-9da1-06ad1dc357a7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.345899 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3f8651-39c3-450e-9da1-06ad1dc357a7-kube-api-access-5bmkr" (OuterVolumeSpecName: "kube-api-access-5bmkr") pod "2c3f8651-39c3-450e-9da1-06ad1dc357a7" (UID: "2c3f8651-39c3-450e-9da1-06ad1dc357a7"). InnerVolumeSpecName "kube-api-access-5bmkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.368014 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c3f8651-39c3-450e-9da1-06ad1dc357a7" (UID: "2c3f8651-39c3-450e-9da1-06ad1dc357a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.431462 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bmkr\" (UniqueName: \"kubernetes.io/projected/2c3f8651-39c3-450e-9da1-06ad1dc357a7-kube-api-access-5bmkr\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.431503 4765 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.431515 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3f8651-39c3-450e-9da1-06ad1dc357a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.922011 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7slsb" event={"ID":"2c3f8651-39c3-450e-9da1-06ad1dc357a7","Type":"ContainerDied","Data":"fe924c53d278db62fb4961fb047cb28d52600d52702f8aa5132e5ce5e4914542"}
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.922055 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe924c53d278db62fb4961fb047cb28d52600d52702f8aa5132e5ce5e4914542"
Dec 03 20:55:33 crc kubenswrapper[4765]: I1203 20:55:33.922125 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7slsb"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.482131 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rd2vs"]
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.482474 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerName="dnsmasq-dns" containerID="cri-o://39c041192db9e26f65cbfdd5839be11157490e4a90469cc5747d16e3513a285f" gracePeriod=10
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.488450 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.503449 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fbbc67fdf-scz6t"]
Dec 03 20:55:34 crc kubenswrapper[4765]: E1203 20:55:34.504248 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3f8651-39c3-450e-9da1-06ad1dc357a7" containerName="barbican-db-sync"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.504270 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3f8651-39c3-450e-9da1-06ad1dc357a7" containerName="barbican-db-sync"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.504529 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3f8651-39c3-450e-9da1-06ad1dc357a7" containerName="barbican-db-sync"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.505621 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fbbc67fdf-scz6t"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.513135 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7sk2h"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.513262 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.513313 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.531438 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65ffb6446-hdn74"]
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.532881 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ffb6446-hdn74"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.535815 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.539812 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fbbc67fdf-scz6t"]
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.551551 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65ffb6446-hdn74"]
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.572624 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fqlm7"]
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.574039 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fqlm7"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.604215 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fqlm7"]
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.653873 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0aad3bb-6dd6-4673-b738-2f04849106ce-logs\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.653925 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-config-data\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.653955 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-config-data\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654013 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-config-data-custom\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654037 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-combined-ca-bundle\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654080 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-logs\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654116 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-config-data-custom\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654159 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vz2b\" (UniqueName: \"kubernetes.io/projected/e0aad3bb-6dd6-4673-b738-2f04849106ce-kube-api-access-4vz2b\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t"
Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654188 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-combined-ca-bundle\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") "
pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.654207 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fhb\" (UniqueName: \"kubernetes.io/projected/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-kube-api-access-x6fhb\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.700377 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f54c7ffd6-9bcb8"] Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.702405 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.705115 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.720549 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f54c7ffd6-9bcb8"] Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758166 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0aad3bb-6dd6-4673-b738-2f04849106ce-logs\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758213 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgzh\" (UniqueName: \"kubernetes.io/projected/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-kube-api-access-vsgzh\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 
20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758241 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-config-data\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758265 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-config-data\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758320 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-config-data-custom\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758337 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758357 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-combined-ca-bundle\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " 
pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758375 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-config\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758402 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-logs\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-config-data-custom\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758445 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-dns-svc\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758465 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: 
\"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758491 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vz2b\" (UniqueName: \"kubernetes.io/projected/e0aad3bb-6dd6-4673-b738-2f04849106ce-kube-api-access-4vz2b\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758515 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-combined-ca-bundle\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.758533 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fhb\" (UniqueName: \"kubernetes.io/projected/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-kube-api-access-x6fhb\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.759180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0aad3bb-6dd6-4673-b738-2f04849106ce-logs\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.759725 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-logs\") pod 
\"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.765985 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-combined-ca-bundle\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.767872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-config-data-custom\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.767996 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-config-data\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.769423 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-config-data-custom\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.774117 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-combined-ca-bundle\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.778292 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0aad3bb-6dd6-4673-b738-2f04849106ce-config-data\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.778561 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fhb\" (UniqueName: \"kubernetes.io/projected/a90044b4-b1fd-4c11-bb40-b52bf1a912f8-kube-api-access-x6fhb\") pod \"barbican-keystone-listener-65ffb6446-hdn74\" (UID: \"a90044b4-b1fd-4c11-bb40-b52bf1a912f8\") " pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.788061 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vz2b\" (UniqueName: \"kubernetes.io/projected/e0aad3bb-6dd6-4673-b738-2f04849106ce-kube-api-access-4vz2b\") pod \"barbican-worker-5fbbc67fdf-scz6t\" (UID: \"e0aad3bb-6dd6-4673-b738-2f04849106ce\") " pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.851627 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fbbc67fdf-scz6t" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgzh\" (UniqueName: \"kubernetes.io/projected/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-kube-api-access-vsgzh\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866322 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14809fec-dba4-4a1c-a145-7432194fe3cf-logs\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866357 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866375 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-config\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866398 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " 
pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866419 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-combined-ca-bundle\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866455 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-dns-svc\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866472 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866508 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data-custom\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.866526 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrzw\" (UniqueName: \"kubernetes.io/projected/14809fec-dba4-4a1c-a145-7432194fe3cf-kube-api-access-djrzw\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: 
\"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.867243 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.873924 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-config\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.874121 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-dns-svc\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.874339 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.895431 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsgzh\" (UniqueName: \"kubernetes.io/projected/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-kube-api-access-vsgzh\") pod \"dnsmasq-dns-869f779d85-fqlm7\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc 
kubenswrapper[4765]: I1203 20:55:34.922490 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.933747 4765 generic.go:334] "Generic (PLEG): container finished" podID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerID="39c041192db9e26f65cbfdd5839be11157490e4a90469cc5747d16e3513a285f" exitCode=0 Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.933788 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" event={"ID":"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d","Type":"ContainerDied","Data":"39c041192db9e26f65cbfdd5839be11157490e4a90469cc5747d16e3513a285f"} Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.946822 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.969213 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data-custom\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.969264 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djrzw\" (UniqueName: \"kubernetes.io/projected/14809fec-dba4-4a1c-a145-7432194fe3cf-kube-api-access-djrzw\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.969361 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14809fec-dba4-4a1c-a145-7432194fe3cf-logs\") pod 
\"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.969398 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.969425 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-combined-ca-bundle\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.969834 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14809fec-dba4-4a1c-a145-7432194fe3cf-logs\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.972440 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data-custom\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.973697 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-combined-ca-bundle\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") 
" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.991848 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djrzw\" (UniqueName: \"kubernetes.io/projected/14809fec-dba4-4a1c-a145-7432194fe3cf-kube-api-access-djrzw\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:34 crc kubenswrapper[4765]: I1203 20:55:34.992410 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data\") pod \"barbican-api-7f54c7ffd6-9bcb8\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.033740 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.440754 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.579309 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-dns-svc\") pod \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.579398 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-sb\") pod \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.579441 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-nb\") pod \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.579497 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-config\") pod \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.579641 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxlpj\" (UniqueName: \"kubernetes.io/projected/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-kube-api-access-rxlpj\") pod \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\" (UID: \"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d\") " Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.592553 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-kube-api-access-rxlpj" (OuterVolumeSpecName: "kube-api-access-rxlpj") pod "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" (UID: "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d"). InnerVolumeSpecName "kube-api-access-rxlpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.639207 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" (UID: "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.639378 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" (UID: "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.644435 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-config" (OuterVolumeSpecName: "config") pod "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" (UID: "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.658102 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" (UID: "512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.680985 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxlpj\" (UniqueName: \"kubernetes.io/projected/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-kube-api-access-rxlpj\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.681013 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.681022 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.681033 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.681041 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.791882 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f54c7ffd6-9bcb8"] Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.840994 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fbbc67fdf-scz6t"] Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.898964 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65ffb6446-hdn74"] Dec 03 20:55:35 crc kubenswrapper[4765]: W1203 20:55:35.900116 4765 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda90044b4_b1fd_4c11_bb40_b52bf1a912f8.slice/crio-705183909c4573cb1fc8a464a19b1752e38d1830b6b00a23c44af6e472253e49 WatchSource:0}: Error finding container 705183909c4573cb1fc8a464a19b1752e38d1830b6b00a23c44af6e472253e49: Status 404 returned error can't find the container with id 705183909c4573cb1fc8a464a19b1752e38d1830b6b00a23c44af6e472253e49 Dec 03 20:55:35 crc kubenswrapper[4765]: W1203 20:55:35.901961 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7af524a7_e3d9_4c89_b98d_7a1e5ce35d76.slice/crio-478c73827fd66012034538ab79320794a3ab68f645e4866667ce59ded4eed4c9 WatchSource:0}: Error finding container 478c73827fd66012034538ab79320794a3ab68f645e4866667ce59ded4eed4c9: Status 404 returned error can't find the container with id 478c73827fd66012034538ab79320794a3ab68f645e4866667ce59ded4eed4c9 Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.914085 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fqlm7"] Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.955146 4765 generic.go:334] "Generic (PLEG): container finished" podID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" containerID="e586e0d7cbce1ab355b8717dfdf1800859ceb72540fc34dd4cb5b10433155232" exitCode=0 Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.955259 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tnnwc" event={"ID":"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13","Type":"ContainerDied","Data":"e586e0d7cbce1ab355b8717dfdf1800859ceb72540fc34dd4cb5b10433155232"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.958394 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" 
event={"ID":"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76","Type":"ContainerStarted","Data":"478c73827fd66012034538ab79320794a3ab68f645e4866667ce59ded4eed4c9"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.959649 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fbbc67fdf-scz6t" event={"ID":"e0aad3bb-6dd6-4673-b738-2f04849106ce","Type":"ContainerStarted","Data":"7a330ef2ade6f501625ceb86fb6aeff1d1b084346bea7b88f7bd7f76dd0351d8"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.960875 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" event={"ID":"14809fec-dba4-4a1c-a145-7432194fe3cf","Type":"ContainerStarted","Data":"20b97520fc4457c0148564ddc92fa1f933032468f67880c220fb38855f4cd2bb"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.962056 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" event={"ID":"a90044b4-b1fd-4c11-bb40-b52bf1a912f8","Type":"ContainerStarted","Data":"705183909c4573cb1fc8a464a19b1752e38d1830b6b00a23c44af6e472253e49"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.966229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerStarted","Data":"74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.966358 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.966379 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-central-agent" containerID="cri-o://eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3" gracePeriod=30 Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.966430 4765 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-notification-agent" containerID="cri-o://798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc" gracePeriod=30 Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.966418 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="sg-core" containerID="cri-o://2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5" gracePeriod=30 Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.966472 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="proxy-httpd" containerID="cri-o://74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c" gracePeriod=30 Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.978935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" event={"ID":"512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d","Type":"ContainerDied","Data":"f018a74a719565e7e6534c0376f73a0f193bb3d9c3c1a1c2de62b29a48fa054e"} Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.979016 4765 scope.go:117] "RemoveContainer" containerID="39c041192db9e26f65cbfdd5839be11157490e4a90469cc5747d16e3513a285f" Dec 03 20:55:35 crc kubenswrapper[4765]: I1203 20:55:35.979189 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-rd2vs" Dec 03 20:55:36 crc kubenswrapper[4765]: I1203 20:55:36.020192 4765 scope.go:117] "RemoveContainer" containerID="62c351df5f9c919853733b66c8b2993c5f5140e98797c1fd8d68523c0b9f85e6" Dec 03 20:55:36 crc kubenswrapper[4765]: I1203 20:55:36.022173 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.578939287 podStartE2EDuration="45.022134366s" podCreationTimestamp="2025-12-03 20:54:51 +0000 UTC" firstStartedPulling="2025-12-03 20:54:52.824456677 +0000 UTC m=+990.755001828" lastFinishedPulling="2025-12-03 20:55:35.267651756 +0000 UTC m=+1033.198196907" observedRunningTime="2025-12-03 20:55:36.000394111 +0000 UTC m=+1033.930939282" watchObservedRunningTime="2025-12-03 20:55:36.022134366 +0000 UTC m=+1033.952679527" Dec 03 20:55:36 crc kubenswrapper[4765]: I1203 20:55:36.042284 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rd2vs"] Dec 03 20:55:36 crc kubenswrapper[4765]: I1203 20:55:36.051412 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rd2vs"] Dec 03 20:55:36 crc kubenswrapper[4765]: I1203 20:55:36.372664 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" path="/var/lib/kubelet/pods/512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d/volumes" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.002235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" event={"ID":"14809fec-dba4-4a1c-a145-7432194fe3cf","Type":"ContainerStarted","Data":"41177e4693699fe138a28cb6750b3b755a342a20dba2da4e7c823e1fd5be47b0"} Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.003283 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.003316 4765 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.003326 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" event={"ID":"14809fec-dba4-4a1c-a145-7432194fe3cf","Type":"ContainerStarted","Data":"d0f91219eac5b57404c12df0b1786c1756304b3a689c5d6fe697db6497af85eb"} Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.006589 4765 generic.go:334] "Generic (PLEG): container finished" podID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerID="74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c" exitCode=0 Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.006621 4765 generic.go:334] "Generic (PLEG): container finished" podID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerID="2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5" exitCode=2 Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.006630 4765 generic.go:334] "Generic (PLEG): container finished" podID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerID="eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3" exitCode=0 Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.006676 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerDied","Data":"74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c"} Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.006701 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerDied","Data":"2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5"} Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.006712 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerDied","Data":"eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3"} Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.009205 4765 generic.go:334] "Generic (PLEG): container finished" podID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerID="1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368" exitCode=0 Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.010026 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" event={"ID":"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76","Type":"ContainerDied","Data":"1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368"} Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.026929 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" podStartSLOduration=3.026910944 podStartE2EDuration="3.026910944s" podCreationTimestamp="2025-12-03 20:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:37.022544737 +0000 UTC m=+1034.953089888" watchObservedRunningTime="2025-12-03 20:55:37.026910944 +0000 UTC m=+1034.957456095" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.092939 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7455d9cf5d-nflkk"] Dec 03 20:55:37 crc kubenswrapper[4765]: E1203 20:55:37.093395 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerName="init" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.093417 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerName="init" Dec 03 20:55:37 crc kubenswrapper[4765]: E1203 20:55:37.093448 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" 
containerName="dnsmasq-dns" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.093458 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerName="dnsmasq-dns" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.093659 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="512f2fd6-e3dd-4fdf-bc0b-c9674b8b730d" containerName="dnsmasq-dns" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.094805 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.096744 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.099236 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.114039 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7455d9cf5d-nflkk"] Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208077 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-combined-ca-bundle\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208437 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-public-tls-certs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208481 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-internal-tls-certs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208513 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-config-data-custom\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208561 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff9g2\" (UniqueName: \"kubernetes.io/projected/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-kube-api-access-ff9g2\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208592 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-config-data\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.208622 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-logs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc 
kubenswrapper[4765]: I1203 20:55:37.309828 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff9g2\" (UniqueName: \"kubernetes.io/projected/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-kube-api-access-ff9g2\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.309886 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-config-data\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.309929 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-logs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.309991 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-combined-ca-bundle\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.310021 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-public-tls-certs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.310072 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-internal-tls-certs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.310104 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-config-data-custom\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.310509 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-logs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.319866 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-config-data-custom\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.321224 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-combined-ca-bundle\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.321553 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-config-data\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.321591 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-public-tls-certs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.322094 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-internal-tls-certs\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.328513 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff9g2\" (UniqueName: \"kubernetes.io/projected/f0226955-fe8e-4128-8c2e-66d0a79ee3ad-kube-api-access-ff9g2\") pod \"barbican-api-7455d9cf5d-nflkk\" (UID: \"f0226955-fe8e-4128-8c2e-66d0a79ee3ad\") " pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.416995 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.444740 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.616177 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-combined-ca-bundle\") pod \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.616446 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-config-data\") pod \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.616531 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-db-sync-config-data\") pod \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.616609 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-scripts\") pod \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.616634 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9vfv\" (UniqueName: \"kubernetes.io/projected/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-kube-api-access-n9vfv\") pod \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.616703 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-etc-machine-id\") pod \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\" (UID: \"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13\") " Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.617125 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" (UID: "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.621838 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-scripts" (OuterVolumeSpecName: "scripts") pod "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" (UID: "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.628966 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-kube-api-access-n9vfv" (OuterVolumeSpecName: "kube-api-access-n9vfv") pod "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" (UID: "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13"). InnerVolumeSpecName "kube-api-access-n9vfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.631219 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" (UID: "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.672180 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" (UID: "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.708226 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-config-data" (OuterVolumeSpecName: "config-data") pod "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" (UID: "5eaeb80c-f6b8-48bb-80a4-3f43623cfc13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.718152 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.718183 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9vfv\" (UniqueName: \"kubernetes.io/projected/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-kube-api-access-n9vfv\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.718197 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.718208 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.718216 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.718225 4765 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:37 crc kubenswrapper[4765]: I1203 20:55:37.901329 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7455d9cf5d-nflkk"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.038837 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" event={"ID":"a90044b4-b1fd-4c11-bb40-b52bf1a912f8","Type":"ContainerStarted","Data":"fe8f4788bfcd616dca63356f7f8a847df59b3e0910612bf56e19325f721c3374"} Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.039106 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" event={"ID":"a90044b4-b1fd-4c11-bb40-b52bf1a912f8","Type":"ContainerStarted","Data":"87a466bf0ce4c3dbe208277d4089a31aa1665c8895d8251f5b849348361b06e8"} Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.043724 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7455d9cf5d-nflkk" event={"ID":"f0226955-fe8e-4128-8c2e-66d0a79ee3ad","Type":"ContainerStarted","Data":"aa19b04a3ace4a91d85f6f11e5a25047d97a4c383ce44c413f6d63520cfe913f"} Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.048371 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tnnwc" event={"ID":"5eaeb80c-f6b8-48bb-80a4-3f43623cfc13","Type":"ContainerDied","Data":"b221442d39d31b72e55bc05967bbd10f891a56b97789c852274fe4384ac7a60a"} Dec 03 20:55:38 
crc kubenswrapper[4765]: I1203 20:55:38.048396 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b221442d39d31b72e55bc05967bbd10f891a56b97789c852274fe4384ac7a60a" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.048481 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tnnwc" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.063103 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65ffb6446-hdn74" podStartSLOduration=2.517102149 podStartE2EDuration="4.06308265s" podCreationTimestamp="2025-12-03 20:55:34 +0000 UTC" firstStartedPulling="2025-12-03 20:55:35.906579075 +0000 UTC m=+1033.837124226" lastFinishedPulling="2025-12-03 20:55:37.452559576 +0000 UTC m=+1035.383104727" observedRunningTime="2025-12-03 20:55:38.060721846 +0000 UTC m=+1035.991266997" watchObservedRunningTime="2025-12-03 20:55:38.06308265 +0000 UTC m=+1035.993627801" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.065122 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" event={"ID":"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76","Type":"ContainerStarted","Data":"053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b"} Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.070167 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fbbc67fdf-scz6t" event={"ID":"e0aad3bb-6dd6-4673-b738-2f04849106ce","Type":"ContainerStarted","Data":"4f52a24c59813fcf10d922a3280103356ff861298f906e1084bba6d6b25e291c"} Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.070208 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fbbc67fdf-scz6t" event={"ID":"e0aad3bb-6dd6-4673-b738-2f04849106ce","Type":"ContainerStarted","Data":"352a9333766e70ee5f2b3008d0c6e6a66d07a69eb79fffc1400b86bf197c0500"} Dec 03 20:55:38 crc 
kubenswrapper[4765]: I1203 20:55:38.092712 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" podStartSLOduration=4.092692678 podStartE2EDuration="4.092692678s" podCreationTimestamp="2025-12-03 20:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:38.083626014 +0000 UTC m=+1036.014171195" watchObservedRunningTime="2025-12-03 20:55:38.092692678 +0000 UTC m=+1036.023237829" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.112715 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fbbc67fdf-scz6t" podStartSLOduration=2.478969454 podStartE2EDuration="4.112697077s" podCreationTimestamp="2025-12-03 20:55:34 +0000 UTC" firstStartedPulling="2025-12-03 20:55:35.819535122 +0000 UTC m=+1033.750080283" lastFinishedPulling="2025-12-03 20:55:37.453262765 +0000 UTC m=+1035.383807906" observedRunningTime="2025-12-03 20:55:38.107641331 +0000 UTC m=+1036.038186482" watchObservedRunningTime="2025-12-03 20:55:38.112697077 +0000 UTC m=+1036.043242218" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.305461 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:38 crc kubenswrapper[4765]: E1203 20:55:38.306014 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" containerName="cinder-db-sync" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.306028 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" containerName="cinder-db-sync" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.306197 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" containerName="cinder-db-sync" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.307140 4765 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.309743 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.309786 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.310084 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.310285 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rwgq2" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.323117 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.384458 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fqlm7"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.425788 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-fnw5n"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.429178 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c78f582-959a-4e52-9ccb-dbdc077b19de-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.429270 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-scripts\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " 
pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.429353 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rwff\" (UniqueName: \"kubernetes.io/projected/0c78f582-959a-4e52-9ccb-dbdc077b19de-kube-api-access-5rwff\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.429379 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.429424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.429447 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.438781 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.456379 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-fnw5n"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.490334 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.491671 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.495534 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.508110 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.530674 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-config\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.530715 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlgv\" (UniqueName: \"kubernetes.io/projected/42c8be96-b365-46a2-8069-f4ccb5c9fa77-kube-api-access-hxlgv\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.531012 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c78f582-959a-4e52-9ccb-dbdc077b19de-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.531118 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.531166 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-dns-svc\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.531217 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-scripts\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.531276 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c78f582-959a-4e52-9ccb-dbdc077b19de-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.531330 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rwff\" (UniqueName: \"kubernetes.io/projected/0c78f582-959a-4e52-9ccb-dbdc077b19de-kube-api-access-5rwff\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc 
kubenswrapper[4765]: I1203 20:55:38.531702 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.532198 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.532323 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.532363 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.536256 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.537265 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-scripts\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.539000 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.542927 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.556990 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rwff\" (UniqueName: \"kubernetes.io/projected/0c78f582-959a-4e52-9ccb-dbdc077b19de-kube-api-access-5rwff\") pod \"cinder-scheduler-0\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.627280 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634156 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634206 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj25d\" (UniqueName: \"kubernetes.io/projected/a0c9bb34-73f0-498b-a748-8aea881db3fa-kube-api-access-wj25d\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634234 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-config\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634253 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634272 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxlgv\" (UniqueName: \"kubernetes.io/projected/42c8be96-b365-46a2-8069-f4ccb5c9fa77-kube-api-access-hxlgv\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 
crc kubenswrapper[4765]: I1203 20:55:38.634288 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634347 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0c9bb34-73f0-498b-a748-8aea881db3fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634366 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634381 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0c9bb34-73f0-498b-a748-8aea881db3fa-logs\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-scripts\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634449 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.634467 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-dns-svc\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.635248 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-dns-svc\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.635429 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-config\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.635691 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.635734 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" 
(UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.651029 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxlgv\" (UniqueName: \"kubernetes.io/projected/42c8be96-b365-46a2-8069-f4ccb5c9fa77-kube-api-access-hxlgv\") pod \"dnsmasq-dns-58db5546cc-fnw5n\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735613 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj25d\" (UniqueName: \"kubernetes.io/projected/a0c9bb34-73f0-498b-a748-8aea881db3fa-kube-api-access-wj25d\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735666 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735688 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735708 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0c9bb34-73f0-498b-a748-8aea881db3fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735728 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735746 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0c9bb34-73f0-498b-a748-8aea881db3fa-logs\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.735791 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-scripts\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.736133 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0c9bb34-73f0-498b-a748-8aea881db3fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.736645 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0c9bb34-73f0-498b-a748-8aea881db3fa-logs\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.740658 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 
20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.741144 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.741291 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.741606 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-scripts\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.759220 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.770963 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj25d\" (UniqueName: \"kubernetes.io/projected/a0c9bb34-73f0-498b-a748-8aea881db3fa-kube-api-access-wj25d\") pod \"cinder-api-0\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " pod="openstack/cinder-api-0" Dec 03 20:55:38 crc kubenswrapper[4765]: I1203 20:55:38.813468 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.082571 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7455d9cf5d-nflkk" event={"ID":"f0226955-fe8e-4128-8c2e-66d0a79ee3ad","Type":"ContainerStarted","Data":"cbff06624b0ba1ab252e336ef809689893ad6d247b0de5cde84f0dfc91e91dbf"} Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.082946 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.082967 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7455d9cf5d-nflkk" event={"ID":"f0226955-fe8e-4128-8c2e-66d0a79ee3ad","Type":"ContainerStarted","Data":"f20fc6767d0dde87088f7eb453610619ba303e540956f55b07d7d03489d6225c"} Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.105695 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7455d9cf5d-nflkk" podStartSLOduration=2.105670167 podStartE2EDuration="2.105670167s" podCreationTimestamp="2025-12-03 20:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:39.098005531 +0000 UTC m=+1037.028550682" watchObservedRunningTime="2025-12-03 20:55:39.105670167 +0000 UTC m=+1037.036215318" Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.158482 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.365100 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:39 crc kubenswrapper[4765]: I1203 20:55:39.411873 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-fnw5n"] Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.095118 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c78f582-959a-4e52-9ccb-dbdc077b19de","Type":"ContainerStarted","Data":"77776cee0ddb7c8de7be749f19ed8099fad5bc5fef9c7361d51c1601e2751c8a"} Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.099115 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a0c9bb34-73f0-498b-a748-8aea881db3fa","Type":"ContainerStarted","Data":"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73"} Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.099163 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a0c9bb34-73f0-498b-a748-8aea881db3fa","Type":"ContainerStarted","Data":"9212729f80065b72433e1878cb8506a5059f4e7f6b0294f744051c5773583f83"} Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.100931 4765 generic.go:334] "Generic (PLEG): container finished" podID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerID="228f2c3a0cf75f72e419fbb0e3f09f66f277cba59a9a2f77b7c8e1a7dc4a4890" exitCode=0 Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.100969 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" event={"ID":"42c8be96-b365-46a2-8069-f4ccb5c9fa77","Type":"ContainerDied","Data":"228f2c3a0cf75f72e419fbb0e3f09f66f277cba59a9a2f77b7c8e1a7dc4a4890"} Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.101018 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" event={"ID":"42c8be96-b365-46a2-8069-f4ccb5c9fa77","Type":"ContainerStarted","Data":"2250ef580fc6f98cabbe44a4c3af6a66237fec1f887de9da433363708fd0b0ec"} Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.101122 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerName="dnsmasq-dns" 
containerID="cri-o://053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b" gracePeriod=10 Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.101460 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.101635 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.670009 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.773112 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.816020 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsgzh\" (UniqueName: \"kubernetes.io/projected/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-kube-api-access-vsgzh\") pod \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.816085 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-dns-svc\") pod \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.816133 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-nb\") pod \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.816156 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-config\") pod \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.816172 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-sb\") pod \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\" (UID: \"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.834872 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-kube-api-access-vsgzh" (OuterVolumeSpecName: "kube-api-access-vsgzh") pod "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" (UID: "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76"). InnerVolumeSpecName "kube-api-access-vsgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.871258 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" (UID: "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.883757 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" (UID: "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.890815 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-config" (OuterVolumeSpecName: "config") pod "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" (UID: "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.926507 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" (UID: "7af524a7-e3d9-4c89-b98d-7a1e5ce35d76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.929835 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrltt\" (UniqueName: \"kubernetes.io/projected/58b2d3ec-4f0c-4186-8d4d-301ba578af34-kube-api-access-lrltt\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.929905 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-run-httpd\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.929977 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-log-httpd\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 
20:55:40.930003 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-combined-ca-bundle\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930143 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-scripts\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930205 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-sg-core-conf-yaml\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930288 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-config-data\") pod \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\" (UID: \"58b2d3ec-4f0c-4186-8d4d-301ba578af34\") " Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930735 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsgzh\" (UniqueName: \"kubernetes.io/projected/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-kube-api-access-vsgzh\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930760 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930774 4765 reconciler_common.go:293] "Volume detached 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930785 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.930798 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.931173 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.931785 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.934738 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b2d3ec-4f0c-4186-8d4d-301ba578af34-kube-api-access-lrltt" (OuterVolumeSpecName: "kube-api-access-lrltt") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). InnerVolumeSpecName "kube-api-access-lrltt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.935205 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-scripts" (OuterVolumeSpecName: "scripts") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:40 crc kubenswrapper[4765]: I1203 20:55:40.971290 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.018020 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.033374 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.033406 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.033419 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrltt\" (UniqueName: \"kubernetes.io/projected/58b2d3ec-4f0c-4186-8d4d-301ba578af34-kube-api-access-lrltt\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.033427 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.033435 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58b2d3ec-4f0c-4186-8d4d-301ba578af34-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.033445 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.041770 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-config-data" (OuterVolumeSpecName: "config-data") pod "58b2d3ec-4f0c-4186-8d4d-301ba578af34" (UID: "58b2d3ec-4f0c-4186-8d4d-301ba578af34"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.115387 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a0c9bb34-73f0-498b-a748-8aea881db3fa","Type":"ContainerStarted","Data":"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.115791 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.117835 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" event={"ID":"42c8be96-b365-46a2-8069-f4ccb5c9fa77","Type":"ContainerStarted","Data":"c36cdbbd761fdc829e33b578ada90aad9c24eec2d5e372e6f667abe0d4c8402c"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.118645 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.120515 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.120854 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c78f582-959a-4e52-9ccb-dbdc077b19de","Type":"ContainerStarted","Data":"a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.122543 4765 generic.go:334] "Generic (PLEG): container finished" podID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerID="798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc" exitCode=0 Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.122585 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerDied","Data":"798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.122602 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58b2d3ec-4f0c-4186-8d4d-301ba578af34","Type":"ContainerDied","Data":"637f39035914d0edb7b7f2b7d556265af1b75ac88fd58aee1b48eee16d76ccc9"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.122618 4765 scope.go:117] "RemoveContainer" containerID="74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.122718 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.131669 4765 generic.go:334] "Generic (PLEG): container finished" podID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerID="053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b" exitCode=0 Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.132019 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" event={"ID":"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76","Type":"ContainerDied","Data":"053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.132079 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" event={"ID":"7af524a7-e3d9-4c89-b98d-7a1e5ce35d76","Type":"ContainerDied","Data":"478c73827fd66012034538ab79320794a3ab68f645e4866667ce59ded4eed4c9"} Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.132151 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fqlm7" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.135467 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b2d3ec-4f0c-4186-8d4d-301ba578af34-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.136410 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.136389766 podStartE2EDuration="3.136389766s" podCreationTimestamp="2025-12-03 20:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:41.132593774 +0000 UTC m=+1039.063138945" watchObservedRunningTime="2025-12-03 20:55:41.136389766 +0000 UTC m=+1039.066934917" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.162502 4765 scope.go:117] "RemoveContainer" containerID="2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.166136 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" podStartSLOduration=3.166120217 podStartE2EDuration="3.166120217s" podCreationTimestamp="2025-12-03 20:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:41.163729653 +0000 UTC m=+1039.094274804" watchObservedRunningTime="2025-12-03 20:55:41.166120217 +0000 UTC m=+1039.096665368" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.186185 4765 scope.go:117] "RemoveContainer" containerID="798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.209359 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fqlm7"] Dec 03 20:55:41 crc 
kubenswrapper[4765]: I1203 20:55:41.220050 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fqlm7"] Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.233231 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.235460 4765 scope.go:117] "RemoveContainer" containerID="eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.245511 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.254799 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.255167 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="sg-core" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255183 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="sg-core" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.255195 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerName="dnsmasq-dns" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255202 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerName="dnsmasq-dns" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.255221 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerName="init" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255228 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerName="init" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.255245 4765 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="proxy-httpd" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255251 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="proxy-httpd" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.255264 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-central-agent" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255270 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-central-agent" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.255285 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-notification-agent" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255291 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-notification-agent" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255475 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-central-agent" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255487 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="ceilometer-notification-agent" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255498 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="proxy-httpd" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.255505 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" containerName="dnsmasq-dns" Dec 03 20:55:41 crc 
kubenswrapper[4765]: I1203 20:55:41.255511 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" containerName="sg-core" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.257020 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.261290 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.261554 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.269771 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.278037 4765 scope.go:117] "RemoveContainer" containerID="74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.281476 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c\": container with ID starting with 74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c not found: ID does not exist" containerID="74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.281524 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c"} err="failed to get container status \"74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c\": rpc error: code = NotFound desc = could not find container \"74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c\": container with ID starting with 
74383538a1a12aa08369b772bd896731c2720080406a93ddaa67c0bce9a0ce3c not found: ID does not exist" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.281553 4765 scope.go:117] "RemoveContainer" containerID="2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.282725 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5\": container with ID starting with 2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5 not found: ID does not exist" containerID="2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.282776 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5"} err="failed to get container status \"2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5\": rpc error: code = NotFound desc = could not find container \"2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5\": container with ID starting with 2ff71d58babd25f58a9f988b8d1e553f6f79d203a7f8d4d5171ef3b4551b88f5 not found: ID does not exist" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.282811 4765 scope.go:117] "RemoveContainer" containerID="798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.283519 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc\": container with ID starting with 798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc not found: ID does not exist" containerID="798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc" Dec 03 20:55:41 crc 
kubenswrapper[4765]: I1203 20:55:41.283543 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc"} err="failed to get container status \"798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc\": rpc error: code = NotFound desc = could not find container \"798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc\": container with ID starting with 798466bd5e538bb94961137d4e10af37b17d23fa801a4f17df85b35edbe1bbfc not found: ID does not exist" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.283563 4765 scope.go:117] "RemoveContainer" containerID="eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.284647 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3\": container with ID starting with eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3 not found: ID does not exist" containerID="eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.284670 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3"} err="failed to get container status \"eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3\": rpc error: code = NotFound desc = could not find container \"eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3\": container with ID starting with eebddafdbe06c528f9dbf1b63dcff85de5cc042cc578660aeed0fefac9cb2ff3 not found: ID does not exist" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.284686 4765 scope.go:117] "RemoveContainer" containerID="053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b" Dec 03 
20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.316222 4765 scope.go:117] "RemoveContainer" containerID="1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.356480 4765 scope.go:117] "RemoveContainer" containerID="053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.361376 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b\": container with ID starting with 053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b not found: ID does not exist" containerID="053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.361424 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b"} err="failed to get container status \"053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b\": rpc error: code = NotFound desc = could not find container \"053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b\": container with ID starting with 053034bed4333887224c20264a1a82d41b526749323e63320684e00167fa5d5b not found: ID does not exist" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.361455 4765 scope.go:117] "RemoveContainer" containerID="1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368" Dec 03 20:55:41 crc kubenswrapper[4765]: E1203 20:55:41.364060 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368\": container with ID starting with 1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368 not found: ID does not exist" 
containerID="1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.364088 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368"} err="failed to get container status \"1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368\": rpc error: code = NotFound desc = could not find container \"1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368\": container with ID starting with 1eade2f64b7adb4372312ad827e6458f07afd3d8687092965d12f7d6a22f8368 not found: ID does not exist" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439526 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-scripts\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439588 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439616 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439688 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439713 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439781 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-config-data\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.439813 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2whvp\" (UniqueName: \"kubernetes.io/projected/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-kube-api-access-2whvp\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.541505 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-scripts\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.541585 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 
20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.541610 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.542346 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.542420 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.542438 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.542742 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-config-data\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.542774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2whvp\" (UniqueName: 
\"kubernetes.io/projected/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-kube-api-access-2whvp\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.543352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.547700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.547997 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-scripts\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.548117 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-config-data\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.551417 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.566968 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2whvp\" (UniqueName: \"kubernetes.io/projected/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-kube-api-access-2whvp\") pod \"ceilometer-0\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " pod="openstack/ceilometer-0" Dec 03 20:55:41 crc kubenswrapper[4765]: I1203 20:55:41.574910 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:55:42 crc kubenswrapper[4765]: I1203 20:55:42.119255 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:55:42 crc kubenswrapper[4765]: I1203 20:55:42.150034 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c78f582-959a-4e52-9ccb-dbdc077b19de","Type":"ContainerStarted","Data":"60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418"} Dec 03 20:55:42 crc kubenswrapper[4765]: I1203 20:55:42.153243 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerStarted","Data":"93fedbba1de8627b7cf8d4745d3f85f7ad97d2b60247c28690b37cab2aac9688"} Dec 03 20:55:42 crc kubenswrapper[4765]: I1203 20:55:42.183659 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.607787091 podStartE2EDuration="4.183637539s" podCreationTimestamp="2025-12-03 20:55:38 +0000 UTC" firstStartedPulling="2025-12-03 20:55:39.165655765 +0000 UTC m=+1037.096200916" lastFinishedPulling="2025-12-03 20:55:39.741506213 +0000 UTC m=+1037.672051364" observedRunningTime="2025-12-03 20:55:42.171815251 +0000 UTC m=+1040.102360402" watchObservedRunningTime="2025-12-03 20:55:42.183637539 +0000 UTC m=+1040.114182690" Dec 03 20:55:42 crc kubenswrapper[4765]: I1203 20:55:42.383694 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="58b2d3ec-4f0c-4186-8d4d-301ba578af34" path="/var/lib/kubelet/pods/58b2d3ec-4f0c-4186-8d4d-301ba578af34/volumes" Dec 03 20:55:42 crc kubenswrapper[4765]: I1203 20:55:42.384792 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af524a7-e3d9-4c89-b98d-7a1e5ce35d76" path="/var/lib/kubelet/pods/7af524a7-e3d9-4c89-b98d-7a1e5ce35d76/volumes" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.167440 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerStarted","Data":"9f232c9fe5d940e6d58a46cdafd4b8e3fe6e94038c790de89e723477397d9b38"} Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.168010 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api-log" containerID="cri-o://dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73" gracePeriod=30 Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.168076 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api" containerID="cri-o://1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd" gracePeriod=30 Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.627796 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.715138 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900074 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0c9bb34-73f0-498b-a748-8aea881db3fa-logs\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900157 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data-custom\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900268 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-scripts\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900385 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0c9bb34-73f0-498b-a748-8aea881db3fa-etc-machine-id\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900426 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-combined-ca-bundle\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900473 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj25d\" (UniqueName: 
\"kubernetes.io/projected/a0c9bb34-73f0-498b-a748-8aea881db3fa-kube-api-access-wj25d\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900583 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data\") pod \"a0c9bb34-73f0-498b-a748-8aea881db3fa\" (UID: \"a0c9bb34-73f0-498b-a748-8aea881db3fa\") " Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.900857 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0c9bb34-73f0-498b-a748-8aea881db3fa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.901331 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0c9bb34-73f0-498b-a748-8aea881db3fa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.901800 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c9bb34-73f0-498b-a748-8aea881db3fa-logs" (OuterVolumeSpecName: "logs") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.906261 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.908022 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-scripts" (OuterVolumeSpecName: "scripts") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.908478 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c9bb34-73f0-498b-a748-8aea881db3fa-kube-api-access-wj25d" (OuterVolumeSpecName: "kube-api-access-wj25d") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "kube-api-access-wj25d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.948404 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:43 crc kubenswrapper[4765]: I1203 20:55:43.977111 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data" (OuterVolumeSpecName: "config-data") pod "a0c9bb34-73f0-498b-a748-8aea881db3fa" (UID: "a0c9bb34-73f0-498b-a748-8aea881db3fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.004354 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.004415 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0c9bb34-73f0-498b-a748-8aea881db3fa-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.004448 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.004472 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.004489 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c9bb34-73f0-498b-a748-8aea881db3fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.004507 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj25d\" (UniqueName: 
\"kubernetes.io/projected/a0c9bb34-73f0-498b-a748-8aea881db3fa-kube-api-access-wj25d\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.179145 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerStarted","Data":"91b493be6364a8679a934ac275d10acc058851fc1a50ebac7fece811a6f7280a"} Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182080 4765 generic.go:334] "Generic (PLEG): container finished" podID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerID="1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd" exitCode=0 Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182121 4765 generic.go:334] "Generic (PLEG): container finished" podID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerID="dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73" exitCode=143 Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182173 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182272 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a0c9bb34-73f0-498b-a748-8aea881db3fa","Type":"ContainerDied","Data":"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd"} Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182345 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a0c9bb34-73f0-498b-a748-8aea881db3fa","Type":"ContainerDied","Data":"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73"} Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182367 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a0c9bb34-73f0-498b-a748-8aea881db3fa","Type":"ContainerDied","Data":"9212729f80065b72433e1878cb8506a5059f4e7f6b0294f744051c5773583f83"} Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.182392 4765 scope.go:117] "RemoveContainer" containerID="1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.234561 4765 scope.go:117] "RemoveContainer" containerID="dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.247576 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.303687 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.315464 4765 scope.go:117] "RemoveContainer" containerID="1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd" Dec 03 20:55:44 crc kubenswrapper[4765]: E1203 20:55:44.315840 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd\": container with ID starting with 1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd not found: ID does not exist" containerID="1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.315869 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd"} err="failed to get container status \"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd\": rpc error: code = NotFound desc = could not find container \"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd\": container with ID starting with 1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd not found: ID does not exist" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.315892 4765 scope.go:117] "RemoveContainer" containerID="dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73" Dec 03 20:55:44 crc kubenswrapper[4765]: E1203 20:55:44.316079 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73\": container with ID starting with dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73 not found: ID does not exist" containerID="dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.316102 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73"} err="failed to get container status \"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73\": rpc error: code = NotFound desc = could not find container \"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73\": container with ID 
starting with dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73 not found: ID does not exist" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.316114 4765 scope.go:117] "RemoveContainer" containerID="1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.331549 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd"} err="failed to get container status \"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd\": rpc error: code = NotFound desc = could not find container \"1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd\": container with ID starting with 1337ccd0e0835dba8930d936978ed72d0fa09e0451d11c3b88209bb0dd7985cd not found: ID does not exist" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.331612 4765 scope.go:117] "RemoveContainer" containerID="dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.343475 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73"} err="failed to get container status \"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73\": rpc error: code = NotFound desc = could not find container \"dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73\": container with ID starting with dba31881f866c9cb7545ffc96ef75485cc884e4368fa3ce00b35ff5d5f7f3a73 not found: ID does not exist" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.346779 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:44 crc kubenswrapper[4765]: E1203 20:55:44.347277 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api" 
Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.347318 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api" Dec 03 20:55:44 crc kubenswrapper[4765]: E1203 20:55:44.347342 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api-log" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.347352 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api-log" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.347578 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.347608 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" containerName="cinder-api-log" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.348873 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.352829 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.352868 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.353156 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.356458 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.371989 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c9bb34-73f0-498b-a748-8aea881db3fa" path="/var/lib/kubelet/pods/a0c9bb34-73f0-498b-a748-8aea881db3fa/volumes" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511170 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511211 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-config-data\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511233 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511266 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5v66\" (UniqueName: \"kubernetes.io/projected/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-kube-api-access-f5v66\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511327 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-scripts\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511347 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.511408 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc 
kubenswrapper[4765]: I1203 20:55:44.511445 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-logs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612541 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612627 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-logs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612685 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612707 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-config-data\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612733 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612778 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5v66\" (UniqueName: \"kubernetes.io/projected/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-kube-api-access-f5v66\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-scripts\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612861 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.612898 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.614265 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc 
kubenswrapper[4765]: I1203 20:55:44.615013 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-logs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.618925 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.623288 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-config-data-custom\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.624958 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.625477 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.626107 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-scripts\") pod \"cinder-api-0\" (UID: 
\"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.632379 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-config-data\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.641440 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5v66\" (UniqueName: \"kubernetes.io/projected/b340b625-0c86-49d6-8e7f-2bbfa3ab71d7-kube-api-access-f5v66\") pod \"cinder-api-0\" (UID: \"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7\") " pod="openstack/cinder-api-0" Dec 03 20:55:44 crc kubenswrapper[4765]: I1203 20:55:44.680310 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 20:55:45 crc kubenswrapper[4765]: I1203 20:55:45.191639 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 20:55:45 crc kubenswrapper[4765]: W1203 20:55:45.193170 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb340b625_0c86_49d6_8e7f_2bbfa3ab71d7.slice/crio-dc7b981b8b8a76e6e722c49fc740ac5a60d90222a397e95781f5df72447d870b WatchSource:0}: Error finding container dc7b981b8b8a76e6e722c49fc740ac5a60d90222a397e95781f5df72447d870b: Status 404 returned error can't find the container with id dc7b981b8b8a76e6e722c49fc740ac5a60d90222a397e95781f5df72447d870b Dec 03 20:55:45 crc kubenswrapper[4765]: I1203 20:55:45.195488 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerStarted","Data":"843e4d6f431325a58714c2dce76d78766e3277f8a48aed80ccc5ee86c6550c5f"} Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.207755 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerStarted","Data":"a14b94a66afb0869ac3066a5260145efb439f9a646f9c07cc75950be2f8f4c18"} Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.208467 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.209705 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7","Type":"ContainerStarted","Data":"59d0705967306c709b285103d34de6aecb5650be2c6af84b0e1171cb8a78763a"} Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.209749 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7","Type":"ContainerStarted","Data":"dc7b981b8b8a76e6e722c49fc740ac5a60d90222a397e95781f5df72447d870b"} Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.236777 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.801979535 podStartE2EDuration="5.236754202s" podCreationTimestamp="2025-12-03 20:55:41 +0000 UTC" firstStartedPulling="2025-12-03 20:55:42.130202279 +0000 UTC m=+1040.060747441" lastFinishedPulling="2025-12-03 20:55:45.564976957 +0000 UTC m=+1043.495522108" observedRunningTime="2025-12-03 20:55:46.226608878 +0000 UTC m=+1044.157154049" watchObservedRunningTime="2025-12-03 20:55:46.236754202 +0000 UTC m=+1044.167299373" Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.442405 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:46 crc kubenswrapper[4765]: I1203 20:55:46.469733 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:47 crc 
kubenswrapper[4765]: I1203 20:55:47.236272 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b340b625-0c86-49d6-8e7f-2bbfa3ab71d7","Type":"ContainerStarted","Data":"6190b2153529084a9be9a001205b908dba537df9fd356a0df7fca0c2e79434df"} Dec 03 20:55:47 crc kubenswrapper[4765]: I1203 20:55:47.236773 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 20:55:47 crc kubenswrapper[4765]: I1203 20:55:47.266857 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.266840993 podStartE2EDuration="3.266840993s" podCreationTimestamp="2025-12-03 20:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:47.264275444 +0000 UTC m=+1045.194820625" watchObservedRunningTime="2025-12-03 20:55:47.266840993 +0000 UTC m=+1045.197386144" Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.760535 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.842020 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"] Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.842369 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" podUID="8379126d-cb93-4700-81d6-393779d0a726" containerName="dnsmasq-dns" containerID="cri-o://aeed2072533078893562fd735c2c3ab0b73b847e828290bb733eaa65557d7d91" gracePeriod=10 Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.880124 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.880416 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.924535 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:48 crc kubenswrapper[4765]: I1203 20:55:48.936432 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7455d9cf5d-nflkk" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.005912 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f54c7ffd6-9bcb8"] Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.006134 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api-log" containerID="cri-o://d0f91219eac5b57404c12df0b1786c1756304b3a689c5d6fe697db6497af85eb" gracePeriod=30 Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.006242 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api" containerID="cri-o://41177e4693699fe138a28cb6750b3b755a342a20dba2da4e7c823e1fd5be47b0" gracePeriod=30 Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.276516 4765 generic.go:334] "Generic (PLEG): container finished" podID="8379126d-cb93-4700-81d6-393779d0a726" containerID="aeed2072533078893562fd735c2c3ab0b73b847e828290bb733eaa65557d7d91" exitCode=0 Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.276586 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" event={"ID":"8379126d-cb93-4700-81d6-393779d0a726","Type":"ContainerDied","Data":"aeed2072533078893562fd735c2c3ab0b73b847e828290bb733eaa65557d7d91"} Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.279211 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerID="d0f91219eac5b57404c12df0b1786c1756304b3a689c5d6fe697db6497af85eb" exitCode=143 Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.279287 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" event={"ID":"14809fec-dba4-4a1c-a145-7432194fe3cf","Type":"ContainerDied","Data":"d0f91219eac5b57404c12df0b1786c1756304b3a689c5d6fe697db6497af85eb"} Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.279416 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="cinder-scheduler" containerID="cri-o://a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f" gracePeriod=30 Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.280428 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="probe" containerID="cri-o://60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418" gracePeriod=30 Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.496593 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.530571 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-config\") pod \"8379126d-cb93-4700-81d6-393779d0a726\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.530666 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-nb\") pod \"8379126d-cb93-4700-81d6-393779d0a726\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.530707 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-sb\") pod \"8379126d-cb93-4700-81d6-393779d0a726\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.530793 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-dns-svc\") pod \"8379126d-cb93-4700-81d6-393779d0a726\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.530888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j77hg\" (UniqueName: \"kubernetes.io/projected/8379126d-cb93-4700-81d6-393779d0a726-kube-api-access-j77hg\") pod \"8379126d-cb93-4700-81d6-393779d0a726\" (UID: \"8379126d-cb93-4700-81d6-393779d0a726\") " Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.537134 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8379126d-cb93-4700-81d6-393779d0a726-kube-api-access-j77hg" (OuterVolumeSpecName: "kube-api-access-j77hg") pod "8379126d-cb93-4700-81d6-393779d0a726" (UID: "8379126d-cb93-4700-81d6-393779d0a726"). InnerVolumeSpecName "kube-api-access-j77hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.602342 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8379126d-cb93-4700-81d6-393779d0a726" (UID: "8379126d-cb93-4700-81d6-393779d0a726"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.617770 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-config" (OuterVolumeSpecName: "config") pod "8379126d-cb93-4700-81d6-393779d0a726" (UID: "8379126d-cb93-4700-81d6-393779d0a726"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.621145 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8379126d-cb93-4700-81d6-393779d0a726" (UID: "8379126d-cb93-4700-81d6-393779d0a726"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.627783 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8379126d-cb93-4700-81d6-393779d0a726" (UID: "8379126d-cb93-4700-81d6-393779d0a726"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.638906 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.639211 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.639222 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.639234 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8379126d-cb93-4700-81d6-393779d0a726-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:49 crc kubenswrapper[4765]: I1203 20:55:49.639249 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j77hg\" (UniqueName: \"kubernetes.io/projected/8379126d-cb93-4700-81d6-393779d0a726-kube-api-access-j77hg\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.289532 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" event={"ID":"8379126d-cb93-4700-81d6-393779d0a726","Type":"ContainerDied","Data":"4b1cd026869288ef762e9311fea98a1d47cfaffbdea7e15640fdeb81bb28deb4"} Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.289592 4765 scope.go:117] "RemoveContainer" containerID="aeed2072533078893562fd735c2c3ab0b73b847e828290bb733eaa65557d7d91" Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.289748 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm" Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.313379 4765 generic.go:334] "Generic (PLEG): container finished" podID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerID="60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418" exitCode=0 Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.313418 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c78f582-959a-4e52-9ccb-dbdc077b19de","Type":"ContainerDied","Data":"60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418"} Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.355900 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"] Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.380844 4765 scope.go:117] "RemoveContainer" containerID="a2aacef8a909715f1120de7e42e9ba10154f682fa41ddb9ddb83536eab103d36" Dec 03 20:55:50 crc kubenswrapper[4765]: I1203 20:55:50.388946 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-kbnbm"] Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.344949 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.803793 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.877558 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c78f582-959a-4e52-9ccb-dbdc077b19de-etc-machine-id\") pod \"0c78f582-959a-4e52-9ccb-dbdc077b19de\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.877867 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rwff\" (UniqueName: \"kubernetes.io/projected/0c78f582-959a-4e52-9ccb-dbdc077b19de-kube-api-access-5rwff\") pod \"0c78f582-959a-4e52-9ccb-dbdc077b19de\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.877689 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c78f582-959a-4e52-9ccb-dbdc077b19de-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0c78f582-959a-4e52-9ccb-dbdc077b19de" (UID: "0c78f582-959a-4e52-9ccb-dbdc077b19de"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.877896 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data-custom\") pod \"0c78f582-959a-4e52-9ccb-dbdc077b19de\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.878037 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-combined-ca-bundle\") pod \"0c78f582-959a-4e52-9ccb-dbdc077b19de\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.878061 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data\") pod \"0c78f582-959a-4e52-9ccb-dbdc077b19de\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.878148 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-scripts\") pod \"0c78f582-959a-4e52-9ccb-dbdc077b19de\" (UID: \"0c78f582-959a-4e52-9ccb-dbdc077b19de\") " Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.878639 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c78f582-959a-4e52-9ccb-dbdc077b19de-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.882948 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod 
"0c78f582-959a-4e52-9ccb-dbdc077b19de" (UID: "0c78f582-959a-4e52-9ccb-dbdc077b19de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.883868 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-scripts" (OuterVolumeSpecName: "scripts") pod "0c78f582-959a-4e52-9ccb-dbdc077b19de" (UID: "0c78f582-959a-4e52-9ccb-dbdc077b19de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.884569 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c78f582-959a-4e52-9ccb-dbdc077b19de-kube-api-access-5rwff" (OuterVolumeSpecName: "kube-api-access-5rwff") pod "0c78f582-959a-4e52-9ccb-dbdc077b19de" (UID: "0c78f582-959a-4e52-9ccb-dbdc077b19de"). InnerVolumeSpecName "kube-api-access-5rwff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.922173 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c78f582-959a-4e52-9ccb-dbdc077b19de" (UID: "0c78f582-959a-4e52-9ccb-dbdc077b19de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.964618 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data" (OuterVolumeSpecName: "config-data") pod "0c78f582-959a-4e52-9ccb-dbdc077b19de" (UID: "0c78f582-959a-4e52-9ccb-dbdc077b19de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.980463 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.980514 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rwff\" (UniqueName: \"kubernetes.io/projected/0c78f582-959a-4e52-9ccb-dbdc077b19de-kube-api-access-5rwff\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.980533 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.980545 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:51 crc kubenswrapper[4765]: I1203 20:55:51.980557 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c78f582-959a-4e52-9ccb-dbdc077b19de-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.152863 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:56042->10.217.0.147:9311: read: connection reset by peer" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.152903 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" 
podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:56030->10.217.0.147:9311: read: connection reset by peer" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.360976 4765 generic.go:334] "Generic (PLEG): container finished" podID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerID="a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f" exitCode=0 Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.363360 4765 generic.go:334] "Generic (PLEG): container finished" podID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerID="41177e4693699fe138a28cb6750b3b755a342a20dba2da4e7c823e1fd5be47b0" exitCode=0 Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.372113 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.373327 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8379126d-cb93-4700-81d6-393779d0a726" path="/var/lib/kubelet/pods/8379126d-cb93-4700-81d6-393779d0a726/volumes" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.374549 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c78f582-959a-4e52-9ccb-dbdc077b19de","Type":"ContainerDied","Data":"a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f"} Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.374598 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0c78f582-959a-4e52-9ccb-dbdc077b19de","Type":"ContainerDied","Data":"77776cee0ddb7c8de7be749f19ed8099fad5bc5fef9c7361d51c1601e2751c8a"} Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.374621 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" 
event={"ID":"14809fec-dba4-4a1c-a145-7432194fe3cf","Type":"ContainerDied","Data":"41177e4693699fe138a28cb6750b3b755a342a20dba2da4e7c823e1fd5be47b0"} Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.374652 4765 scope.go:117] "RemoveContainer" containerID="60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.432069 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.455392 4765 scope.go:117] "RemoveContainer" containerID="a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.458599 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.472546 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:52 crc kubenswrapper[4765]: E1203 20:55:52.473108 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="cinder-scheduler" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.473197 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="cinder-scheduler" Dec 03 20:55:52 crc kubenswrapper[4765]: E1203 20:55:52.473252 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="probe" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.473309 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="probe" Dec 03 20:55:52 crc kubenswrapper[4765]: E1203 20:55:52.473366 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8379126d-cb93-4700-81d6-393779d0a726" containerName="dnsmasq-dns" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 
20:55:52.473414 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8379126d-cb93-4700-81d6-393779d0a726" containerName="dnsmasq-dns" Dec 03 20:55:52 crc kubenswrapper[4765]: E1203 20:55:52.473485 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8379126d-cb93-4700-81d6-393779d0a726" containerName="init" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.473532 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8379126d-cb93-4700-81d6-393779d0a726" containerName="init" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.473737 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="cinder-scheduler" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.473797 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" containerName="probe" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.473849 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8379126d-cb93-4700-81d6-393779d0a726" containerName="dnsmasq-dns" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.474799 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.480048 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.482692 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.489905 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.489985 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.490013 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5cc\" (UniqueName: \"kubernetes.io/projected/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-kube-api-access-9t5cc\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.490059 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.490111 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.490142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.499962 4765 scope.go:117] "RemoveContainer" containerID="60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418" Dec 03 20:55:52 crc kubenswrapper[4765]: E1203 20:55:52.510280 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418\": container with ID starting with 60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418 not found: ID does not exist" containerID="60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.510344 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418"} err="failed to get container status \"60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418\": rpc error: code = NotFound desc = could not find container \"60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418\": container with ID starting with 60095e31d91156d2ac37cc0b354c0e2333faee007232bb646d0ca14b29353418 not found: ID does not exist" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 
20:55:52.510375 4765 scope.go:117] "RemoveContainer" containerID="a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f" Dec 03 20:55:52 crc kubenswrapper[4765]: E1203 20:55:52.510807 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f\": container with ID starting with a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f not found: ID does not exist" containerID="a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.510831 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f"} err="failed to get container status \"a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f\": rpc error: code = NotFound desc = could not find container \"a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f\": container with ID starting with a0363a1ecbffb22d02b77f00ad0f1aa13724fad31a2feb414e06ad3c7bef601f not found: ID does not exist" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.591922 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.592015 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.592043 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5cc\" (UniqueName: \"kubernetes.io/projected/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-kube-api-access-9t5cc\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.592095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.592151 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.592182 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.592434 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.596333 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.596862 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.597151 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.597563 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.614534 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5cc\" (UniqueName: \"kubernetes.io/projected/bd57395a-abd0-4768-b1e9-0cdf5a9930d3-kube-api-access-9t5cc\") pod \"cinder-scheduler-0\" (UID: \"bd57395a-abd0-4768-b1e9-0cdf5a9930d3\") " pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.688776 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.790558 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.793929 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data\") pod \"14809fec-dba4-4a1c-a145-7432194fe3cf\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.794050 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djrzw\" (UniqueName: \"kubernetes.io/projected/14809fec-dba4-4a1c-a145-7432194fe3cf-kube-api-access-djrzw\") pod \"14809fec-dba4-4a1c-a145-7432194fe3cf\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.794085 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-combined-ca-bundle\") pod \"14809fec-dba4-4a1c-a145-7432194fe3cf\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.794152 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14809fec-dba4-4a1c-a145-7432194fe3cf-logs\") pod \"14809fec-dba4-4a1c-a145-7432194fe3cf\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.794246 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data-custom\") pod \"14809fec-dba4-4a1c-a145-7432194fe3cf\" (UID: \"14809fec-dba4-4a1c-a145-7432194fe3cf\") " Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.794894 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/14809fec-dba4-4a1c-a145-7432194fe3cf-logs" (OuterVolumeSpecName: "logs") pod "14809fec-dba4-4a1c-a145-7432194fe3cf" (UID: "14809fec-dba4-4a1c-a145-7432194fe3cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.798518 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14809fec-dba4-4a1c-a145-7432194fe3cf-kube-api-access-djrzw" (OuterVolumeSpecName: "kube-api-access-djrzw") pod "14809fec-dba4-4a1c-a145-7432194fe3cf" (UID: "14809fec-dba4-4a1c-a145-7432194fe3cf"). InnerVolumeSpecName "kube-api-access-djrzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.798580 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "14809fec-dba4-4a1c-a145-7432194fe3cf" (UID: "14809fec-dba4-4a1c-a145-7432194fe3cf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.834556 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14809fec-dba4-4a1c-a145-7432194fe3cf" (UID: "14809fec-dba4-4a1c-a145-7432194fe3cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.849798 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data" (OuterVolumeSpecName: "config-data") pod "14809fec-dba4-4a1c-a145-7432194fe3cf" (UID: "14809fec-dba4-4a1c-a145-7432194fe3cf"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.895903 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djrzw\" (UniqueName: \"kubernetes.io/projected/14809fec-dba4-4a1c-a145-7432194fe3cf-kube-api-access-djrzw\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.895934 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.895949 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14809fec-dba4-4a1c-a145-7432194fe3cf-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.895962 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:52 crc kubenswrapper[4765]: I1203 20:55:52.895974 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14809fec-dba4-4a1c-a145-7432194fe3cf-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.265380 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 20:55:53 crc kubenswrapper[4765]: W1203 20:55:53.275044 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd57395a_abd0_4768_b1e9_0cdf5a9930d3.slice/crio-4797685040ff595eabe8b23eb0d6c9c9dda2a1489ad15aea91ee113081ca8286 WatchSource:0}: Error finding container 4797685040ff595eabe8b23eb0d6c9c9dda2a1489ad15aea91ee113081ca8286: 
Status 404 returned error can't find the container with id 4797685040ff595eabe8b23eb0d6c9c9dda2a1489ad15aea91ee113081ca8286 Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.374857 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd57395a-abd0-4768-b1e9-0cdf5a9930d3","Type":"ContainerStarted","Data":"4797685040ff595eabe8b23eb0d6c9c9dda2a1489ad15aea91ee113081ca8286"} Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.377356 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.377359 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f54c7ffd6-9bcb8" event={"ID":"14809fec-dba4-4a1c-a145-7432194fe3cf","Type":"ContainerDied","Data":"20b97520fc4457c0148564ddc92fa1f933032468f67880c220fb38855f4cd2bb"} Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.377484 4765 scope.go:117] "RemoveContainer" containerID="41177e4693699fe138a28cb6750b3b755a342a20dba2da4e7c823e1fd5be47b0" Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.421591 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f54c7ffd6-9bcb8"] Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.422499 4765 scope.go:117] "RemoveContainer" containerID="d0f91219eac5b57404c12df0b1786c1756304b3a689c5d6fe697db6497af85eb" Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.432479 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f54c7ffd6-9bcb8"] Dec 03 20:55:53 crc kubenswrapper[4765]: I1203 20:55:53.961961 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8458f9f649-c6lrl" Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.030029 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7655896996-bvmmg"] Dec 03 20:55:54 crc kubenswrapper[4765]: 
I1203 20:55:54.033512 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7655896996-bvmmg" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-api" containerID="cri-o://2cd89c5ed7d221a041e2723fd51a7215fa1ce2050e692016d0630e8132058798" gracePeriod=30 Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.033877 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7655896996-bvmmg" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-httpd" containerID="cri-o://538030bcbc76a23791f1b760308be19bda72fd17241427b2eba81c0586719b41" gracePeriod=30 Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.371594 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c78f582-959a-4e52-9ccb-dbdc077b19de" path="/var/lib/kubelet/pods/0c78f582-959a-4e52-9ccb-dbdc077b19de/volumes" Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.373605 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" path="/var/lib/kubelet/pods/14809fec-dba4-4a1c-a145-7432194fe3cf/volumes" Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.397140 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd57395a-abd0-4768-b1e9-0cdf5a9930d3","Type":"ContainerStarted","Data":"35aa8c9ca2398409d0c4667bab262b81c364079c15c7c8d0bb8baa91a5a646a5"} Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.400152 4765 generic.go:334] "Generic (PLEG): container finished" podID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerID="538030bcbc76a23791f1b760308be19bda72fd17241427b2eba81c0586719b41" exitCode=0 Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.400187 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7655896996-bvmmg" 
event={"ID":"d7b131b1-8d60-4fa4-bb58-aca271fe6524","Type":"ContainerDied","Data":"538030bcbc76a23791f1b760308be19bda72fd17241427b2eba81c0586719b41"} Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.798329 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.798388 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.798439 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.799100 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d266b170dcf90c0708b0665cd61a7d72698207d468421a1880d76491e1e67a93"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 20:55:54 crc kubenswrapper[4765]: I1203 20:55:54.799162 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://d266b170dcf90c0708b0665cd61a7d72698207d468421a1880d76491e1e67a93" gracePeriod=600 Dec 03 20:55:55 crc kubenswrapper[4765]: I1203 20:55:55.409008 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd57395a-abd0-4768-b1e9-0cdf5a9930d3","Type":"ContainerStarted","Data":"0a91812bff2e7ae250665cf49320724dd81c0b24cc626749d660e0f5838b78e5"} Dec 03 20:55:55 crc kubenswrapper[4765]: I1203 20:55:55.411517 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="d266b170dcf90c0708b0665cd61a7d72698207d468421a1880d76491e1e67a93" exitCode=0 Dec 03 20:55:55 crc kubenswrapper[4765]: I1203 20:55:55.411559 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"d266b170dcf90c0708b0665cd61a7d72698207d468421a1880d76491e1e67a93"} Dec 03 20:55:55 crc kubenswrapper[4765]: I1203 20:55:55.411637 4765 scope.go:117] "RemoveContainer" containerID="3ba36381a71f6d06b4b5aa7cb8542b9c71a3ce01cc92c054d25575f73f145c33" Dec 03 20:55:55 crc kubenswrapper[4765]: I1203 20:55:55.455572 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.45552695 podStartE2EDuration="3.45552695s" podCreationTimestamp="2025-12-03 20:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:55:55.451622154 +0000 UTC m=+1053.382167315" watchObservedRunningTime="2025-12-03 20:55:55.45552695 +0000 UTC m=+1053.386072111" Dec 03 20:55:56 crc kubenswrapper[4765]: I1203 20:55:56.430794 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"6aa749a3d52e7027f4c1b57e3c92a047ee77ee6c07dbbdf89a660a9fce0275e1"} Dec 03 20:55:56 crc kubenswrapper[4765]: I1203 20:55:56.575887 4765 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.443155 4765 generic.go:334] "Generic (PLEG): container finished" podID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerID="2cd89c5ed7d221a041e2723fd51a7215fa1ce2050e692016d0630e8132058798" exitCode=0 Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.444456 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7655896996-bvmmg" event={"ID":"d7b131b1-8d60-4fa4-bb58-aca271fe6524","Type":"ContainerDied","Data":"2cd89c5ed7d221a041e2723fd51a7215fa1ce2050e692016d0630e8132058798"} Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.444945 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7655896996-bvmmg" event={"ID":"d7b131b1-8d60-4fa4-bb58-aca271fe6524","Type":"ContainerDied","Data":"34241ccfc31ea9081d97e5d507d05ba6f2ca6aeb7552902b263a3673f4b096d7"} Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.444968 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34241ccfc31ea9081d97e5d507d05ba6f2ca6aeb7552902b263a3673f4b096d7" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.516010 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.584862 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-ovndb-tls-certs\") pod \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.585216 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-config\") pod \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.585258 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-httpd-config\") pod \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.585382 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-combined-ca-bundle\") pod \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.585424 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7ld\" (UniqueName: \"kubernetes.io/projected/d7b131b1-8d60-4fa4-bb58-aca271fe6524-kube-api-access-2f7ld\") pod \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\" (UID: \"d7b131b1-8d60-4fa4-bb58-aca271fe6524\") " Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.593732 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d7b131b1-8d60-4fa4-bb58-aca271fe6524-kube-api-access-2f7ld" (OuterVolumeSpecName: "kube-api-access-2f7ld") pod "d7b131b1-8d60-4fa4-bb58-aca271fe6524" (UID: "d7b131b1-8d60-4fa4-bb58-aca271fe6524"). InnerVolumeSpecName "kube-api-access-2f7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.608459 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d7b131b1-8d60-4fa4-bb58-aca271fe6524" (UID: "d7b131b1-8d60-4fa4-bb58-aca271fe6524"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.636103 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7b131b1-8d60-4fa4-bb58-aca271fe6524" (UID: "d7b131b1-8d60-4fa4-bb58-aca271fe6524"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.648188 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-config" (OuterVolumeSpecName: "config") pod "d7b131b1-8d60-4fa4-bb58-aca271fe6524" (UID: "d7b131b1-8d60-4fa4-bb58-aca271fe6524"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.664662 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d7b131b1-8d60-4fa4-bb58-aca271fe6524" (UID: "d7b131b1-8d60-4fa4-bb58-aca271fe6524"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.686993 4765 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.687066 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.687078 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.687090 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b131b1-8d60-4fa4-bb58-aca271fe6524-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.687126 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7ld\" (UniqueName: \"kubernetes.io/projected/d7b131b1-8d60-4fa4-bb58-aca271fe6524-kube-api-access-2f7ld\") on node \"crc\" DevicePath \"\"" Dec 03 20:55:57 crc kubenswrapper[4765]: I1203 20:55:57.791402 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 20:55:58 crc kubenswrapper[4765]: I1203 20:55:58.451508 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7655896996-bvmmg" Dec 03 20:55:58 crc kubenswrapper[4765]: I1203 20:55:58.480866 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7655896996-bvmmg"] Dec 03 20:55:58 crc kubenswrapper[4765]: E1203 20:55:58.484744 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b131b1_8d60_4fa4_bb58_aca271fe6524.slice\": RecentStats: unable to find data in memory cache]" Dec 03 20:55:58 crc kubenswrapper[4765]: I1203 20:55:58.489534 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7655896996-bvmmg"] Dec 03 20:55:59 crc kubenswrapper[4765]: I1203 20:55:59.569200 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-57c6f94f6-xmzln" Dec 03 20:55:59 crc kubenswrapper[4765]: I1203 20:55:59.968050 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c98cc54bd-jknm9" Dec 03 20:55:59 crc kubenswrapper[4765]: I1203 20:55:59.982018 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c98cc54bd-jknm9" Dec 03 20:56:00 crc kubenswrapper[4765]: I1203 20:56:00.370164 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" path="/var/lib/kubelet/pods/d7b131b1-8d60-4fa4-bb58-aca271fe6524/volumes" Dec 03 20:56:02 crc kubenswrapper[4765]: I1203 20:56:02.994015 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.836962 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qk68p"] Dec 03 20:56:03 crc kubenswrapper[4765]: E1203 20:56:03.837932 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api-log" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.838025 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api-log" Dec 03 20:56:03 crc kubenswrapper[4765]: E1203 20:56:03.838113 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-httpd" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.838180 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-httpd" Dec 03 20:56:03 crc kubenswrapper[4765]: E1203 20:56:03.838254 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.838371 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api" Dec 03 20:56:03 crc kubenswrapper[4765]: E1203 20:56:03.838456 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-api" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.838526 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-api" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.838852 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.838973 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-httpd" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.839060 4765 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d7b131b1-8d60-4fa4-bb58-aca271fe6524" containerName="neutron-api" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.839137 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="14809fec-dba4-4a1c-a145-7432194fe3cf" containerName="barbican-api-log" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.839966 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.847312 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qk68p"] Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.932087 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.935085 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.936637 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-l848w" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.936855 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.936927 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.940796 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.992105 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3373-account-create-update-zcll2"] Dec 03 20:56:03 crc kubenswrapper[4765]: I1203 20:56:03.993386 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.000332 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.002674 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qrslv"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.003908 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.014153 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3373-account-create-update-zcll2"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.024193 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-operator-scripts\") pod \"nova-api-db-create-qk68p\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.024342 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smbx8\" (UniqueName: \"kubernetes.io/projected/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-kube-api-access-smbx8\") pod \"nova-api-db-create-qk68p\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.024793 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qrslv"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.107690 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mgkxp"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.108913 4765 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.122866 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mgkxp"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125172 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f578a1fd-499c-4610-af9f-3e8ad5555749-operator-scripts\") pod \"nova-api-3373-account-create-update-zcll2\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125351 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-operator-scripts\") pod \"nova-api-db-create-qk68p\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125504 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125589 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d90462-f312-4c5f-a20d-092d517b41e0-operator-scripts\") pod \"nova-cell0-db-create-qrslv\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125658 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwp4h\" (UniqueName: \"kubernetes.io/projected/f578a1fd-499c-4610-af9f-3e8ad5555749-kube-api-access-pwp4h\") pod \"nova-api-3373-account-create-update-zcll2\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125779 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.125886 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smbx8\" (UniqueName: \"kubernetes.io/projected/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-kube-api-access-smbx8\") pod \"nova-api-db-create-qk68p\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.126067 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.126160 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmd5\" (UniqueName: \"kubernetes.io/projected/f6d90462-f312-4c5f-a20d-092d517b41e0-kube-api-access-zrmd5\") pod \"nova-cell0-db-create-qrslv\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 
20:56:04.126247 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlnfd\" (UniqueName: \"kubernetes.io/projected/1b49db51-0762-4252-8b51-beed29a203a0-kube-api-access-qlnfd\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.126398 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-operator-scripts\") pod \"nova-api-db-create-qk68p\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.155914 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smbx8\" (UniqueName: \"kubernetes.io/projected/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-kube-api-access-smbx8\") pod \"nova-api-db-create-qk68p\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.180381 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:04 crc kubenswrapper[4765]: E1203 20:56:04.181012 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-qlnfd openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="1b49db51-0762-4252-8b51-beed29a203a0" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.200525 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.200900 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.212269 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e997-account-create-update-m6fht"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.214103 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.215539 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.224998 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228333 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrmd5\" (UniqueName: \"kubernetes.io/projected/f6d90462-f312-4c5f-a20d-092d517b41e0-kube-api-access-zrmd5\") pod \"nova-cell0-db-create-qrslv\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228423 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlnfd\" (UniqueName: \"kubernetes.io/projected/1b49db51-0762-4252-8b51-beed29a203a0-kube-api-access-qlnfd\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228469 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgx9q\" (UniqueName: \"kubernetes.io/projected/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-kube-api-access-cgx9q\") pod \"nova-cell1-db-create-mgkxp\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 
20:56:04.228500 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f578a1fd-499c-4610-af9f-3e8ad5555749-operator-scripts\") pod \"nova-api-3373-account-create-update-zcll2\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228535 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-operator-scripts\") pod \"nova-cell1-db-create-mgkxp\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228600 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228641 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d90462-f312-4c5f-a20d-092d517b41e0-operator-scripts\") pod \"nova-cell0-db-create-qrslv\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228656 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwp4h\" (UniqueName: \"kubernetes.io/projected/f578a1fd-499c-4610-af9f-3e8ad5555749-kube-api-access-pwp4h\") pod \"nova-api-3373-account-create-update-zcll2\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc 
kubenswrapper[4765]: I1203 20:56:04.228715 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228775 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.228856 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.230374 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f578a1fd-499c-4610-af9f-3e8ad5555749-operator-scripts\") pod \"nova-api-3373-account-create-update-zcll2\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.230533 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d90462-f312-4c5f-a20d-092d517b41e0-operator-scripts\") pod \"nova-cell0-db-create-qrslv\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: E1203 20:56:04.230651 4765 projected.go:194] Error preparing data for projected volume kube-api-access-qlnfd for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference 
(1b49db51-0762-4252-8b51-beed29a203a0) does not match the UID in record. The object might have been deleted and then recreated Dec 03 20:56:04 crc kubenswrapper[4765]: E1203 20:56:04.230711 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b49db51-0762-4252-8b51-beed29a203a0-kube-api-access-qlnfd podName:1b49db51-0762-4252-8b51-beed29a203a0 nodeName:}" failed. No retries permitted until 2025-12-03 20:56:04.730693182 +0000 UTC m=+1062.661238333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qlnfd" (UniqueName: "kubernetes.io/projected/1b49db51-0762-4252-8b51-beed29a203a0-kube-api-access-qlnfd") pod "openstackclient" (UID: "1b49db51-0762-4252-8b51-beed29a203a0") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (1b49db51-0762-4252-8b51-beed29a203a0) does not match the UID in record. The object might have been deleted and then recreated Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.231272 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.237863 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e997-account-create-update-m6fht"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.250063 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.250229 4765 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.255165 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwp4h\" (UniqueName: \"kubernetes.io/projected/f578a1fd-499c-4610-af9f-3e8ad5555749-kube-api-access-pwp4h\") pod \"nova-api-3373-account-create-update-zcll2\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.255687 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.256868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrmd5\" (UniqueName: \"kubernetes.io/projected/f6d90462-f312-4c5f-a20d-092d517b41e0-kube-api-access-zrmd5\") pod \"nova-cell0-db-create-qrslv\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.294872 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b283-account-create-update-jdflm"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.296908 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.299586 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.302909 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b283-account-create-update-jdflm"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.320198 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331665 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgx9q\" (UniqueName: \"kubernetes.io/projected/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-kube-api-access-cgx9q\") pod \"nova-cell1-db-create-mgkxp\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331749 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-operator-scripts\") pod \"nova-cell1-db-create-mgkxp\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331802 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j989f\" (UniqueName: \"kubernetes.io/projected/bd00d94a-54ce-420e-959d-4b10ecce11d0-kube-api-access-j989f\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331857 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/6a9b4f82-dda0-446d-97c5-b94c26326298-operator-scripts\") pod \"nova-cell0-e997-account-create-update-m6fht\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331892 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlj2t\" (UniqueName: \"kubernetes.io/projected/6a9b4f82-dda0-446d-97c5-b94c26326298-kube-api-access-vlj2t\") pod \"nova-cell0-e997-account-create-update-m6fht\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331951 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bd00d94a-54ce-420e-959d-4b10ecce11d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.331988 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bd00d94a-54ce-420e-959d-4b10ecce11d0-openstack-config\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.332028 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00d94a-54ce-420e-959d-4b10ecce11d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.332897 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-operator-scripts\") pod \"nova-cell1-db-create-mgkxp\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.338738 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.372345 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgx9q\" (UniqueName: \"kubernetes.io/projected/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-kube-api-access-cgx9q\") pod \"nova-cell1-db-create-mgkxp\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.424644 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.433720 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9b4f82-dda0-446d-97c5-b94c26326298-operator-scripts\") pod \"nova-cell0-e997-account-create-update-m6fht\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.433779 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqc5\" (UniqueName: \"kubernetes.io/projected/bcbb6896-f557-424f-871f-2f5df6968bd6-kube-api-access-5dqc5\") pod \"nova-cell1-b283-account-create-update-jdflm\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.433805 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vlj2t\" (UniqueName: \"kubernetes.io/projected/6a9b4f82-dda0-446d-97c5-b94c26326298-kube-api-access-vlj2t\") pod \"nova-cell0-e997-account-create-update-m6fht\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.433848 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bd00d94a-54ce-420e-959d-4b10ecce11d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.433882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bd00d94a-54ce-420e-959d-4b10ecce11d0-openstack-config\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.433923 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00d94a-54ce-420e-959d-4b10ecce11d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.434064 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcbb6896-f557-424f-871f-2f5df6968bd6-operator-scripts\") pod \"nova-cell1-b283-account-create-update-jdflm\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.434097 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j989f\" (UniqueName: \"kubernetes.io/projected/bd00d94a-54ce-420e-959d-4b10ecce11d0-kube-api-access-j989f\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.435109 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9b4f82-dda0-446d-97c5-b94c26326298-operator-scripts\") pod \"nova-cell0-e997-account-create-update-m6fht\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.450492 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd00d94a-54ce-420e-959d-4b10ecce11d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.450820 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bd00d94a-54ce-420e-959d-4b10ecce11d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.455800 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlj2t\" (UniqueName: \"kubernetes.io/projected/6a9b4f82-dda0-446d-97c5-b94c26326298-kube-api-access-vlj2t\") pod \"nova-cell0-e997-account-create-update-m6fht\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.457615 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j989f\" (UniqueName: \"kubernetes.io/projected/bd00d94a-54ce-420e-959d-4b10ecce11d0-kube-api-access-j989f\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.459332 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bd00d94a-54ce-420e-959d-4b10ecce11d0-openstack-config\") pod \"openstackclient\" (UID: \"bd00d94a-54ce-420e-959d-4b10ecce11d0\") " pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.534000 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.537014 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcbb6896-f557-424f-871f-2f5df6968bd6-operator-scripts\") pod \"nova-cell1-b283-account-create-update-jdflm\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.537110 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqc5\" (UniqueName: \"kubernetes.io/projected/bcbb6896-f557-424f-871f-2f5df6968bd6-kube-api-access-5dqc5\") pod \"nova-cell1-b283-account-create-update-jdflm\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.540862 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcbb6896-f557-424f-871f-2f5df6968bd6-operator-scripts\") pod \"nova-cell1-b283-account-create-update-jdflm\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " 
pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.556854 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qk68p"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.558877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqc5\" (UniqueName: \"kubernetes.io/projected/bcbb6896-f557-424f-871f-2f5df6968bd6-kube-api-access-5dqc5\") pod \"nova-cell1-b283-account-create-update-jdflm\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.598725 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.603704 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1b49db51-0762-4252-8b51-beed29a203a0" podUID="bd00d94a-54ce-420e-959d-4b10ecce11d0" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.617887 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.629915 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.659517 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.740549 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config-secret\") pod \"1b49db51-0762-4252-8b51-beed29a203a0\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.740627 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-combined-ca-bundle\") pod \"1b49db51-0762-4252-8b51-beed29a203a0\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.740648 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config\") pod \"1b49db51-0762-4252-8b51-beed29a203a0\" (UID: \"1b49db51-0762-4252-8b51-beed29a203a0\") " Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.741027 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlnfd\" (UniqueName: \"kubernetes.io/projected/1b49db51-0762-4252-8b51-beed29a203a0-kube-api-access-qlnfd\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.741477 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1b49db51-0762-4252-8b51-beed29a203a0" (UID: "1b49db51-0762-4252-8b51-beed29a203a0"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.745442 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b49db51-0762-4252-8b51-beed29a203a0" (UID: "1b49db51-0762-4252-8b51-beed29a203a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.748634 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1b49db51-0762-4252-8b51-beed29a203a0" (UID: "1b49db51-0762-4252-8b51-beed29a203a0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.842587 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.842792 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b49db51-0762-4252-8b51-beed29a203a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.842804 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1b49db51-0762-4252-8b51-beed29a203a0-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.872217 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3373-account-create-update-zcll2"] Dec 03 20:56:04 crc 
kubenswrapper[4765]: W1203 20:56:04.880120 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf578a1fd_499c_4610_af9f_3e8ad5555749.slice/crio-38383eaf149706d83532f1bd472c678d02d5742449b3c5b82c980afecd3724df WatchSource:0}: Error finding container 38383eaf149706d83532f1bd472c678d02d5742449b3c5b82c980afecd3724df: Status 404 returned error can't find the container with id 38383eaf149706d83532f1bd472c678d02d5742449b3c5b82c980afecd3724df Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.949375 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qrslv"] Dec 03 20:56:04 crc kubenswrapper[4765]: I1203 20:56:04.991813 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mgkxp"] Dec 03 20:56:05 crc kubenswrapper[4765]: W1203 20:56:05.002265 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b2f84ce_d56b_46e8_bcc4_0b65034e87a1.slice/crio-a02f41b7065ce1168395f56609513a1bec876aa0eea8126100ad46cf53d9de84 WatchSource:0}: Error finding container a02f41b7065ce1168395f56609513a1bec876aa0eea8126100ad46cf53d9de84: Status 404 returned error can't find the container with id a02f41b7065ce1168395f56609513a1bec876aa0eea8126100ad46cf53d9de84 Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.207719 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.279552 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e997-account-create-update-m6fht"] Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.423770 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b283-account-create-update-jdflm"] Dec 03 20:56:05 crc kubenswrapper[4765]: W1203 20:56:05.496365 4765 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcbb6896_f557_424f_871f_2f5df6968bd6.slice/crio-0fdf49368f2616e488371112e7543463514068f98d11500cbfdc0ede047fdebf WatchSource:0}: Error finding container 0fdf49368f2616e488371112e7543463514068f98d11500cbfdc0ede047fdebf: Status 404 returned error can't find the container with id 0fdf49368f2616e488371112e7543463514068f98d11500cbfdc0ede047fdebf Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.545623 4765 generic.go:334] "Generic (PLEG): container finished" podID="f6d90462-f312-4c5f-a20d-092d517b41e0" containerID="f3600cd37fa54b671bbd74790d8168820ebe8ca4eb3cd878fb18a40af2066af7" exitCode=0 Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.545696 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qrslv" event={"ID":"f6d90462-f312-4c5f-a20d-092d517b41e0","Type":"ContainerDied","Data":"f3600cd37fa54b671bbd74790d8168820ebe8ca4eb3cd878fb18a40af2066af7"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.545724 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qrslv" event={"ID":"f6d90462-f312-4c5f-a20d-092d517b41e0","Type":"ContainerStarted","Data":"bd76ebee67dd784c0c57ed80b08abd4bae03216db62cc7736fdace9e30b70b81"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.547260 4765 generic.go:334] "Generic (PLEG): container finished" podID="f578a1fd-499c-4610-af9f-3e8ad5555749" containerID="719a25f5480a5677e97fbecac2bb6abee5c94e127a294cc5763ff7b08afe19be" exitCode=0 Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.547330 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3373-account-create-update-zcll2" event={"ID":"f578a1fd-499c-4610-af9f-3e8ad5555749","Type":"ContainerDied","Data":"719a25f5480a5677e97fbecac2bb6abee5c94e127a294cc5763ff7b08afe19be"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.547352 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-3373-account-create-update-zcll2" event={"ID":"f578a1fd-499c-4610-af9f-3e8ad5555749","Type":"ContainerStarted","Data":"38383eaf149706d83532f1bd472c678d02d5742449b3c5b82c980afecd3724df"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.548920 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e997-account-create-update-m6fht" event={"ID":"6a9b4f82-dda0-446d-97c5-b94c26326298","Type":"ContainerStarted","Data":"50fec840002354304b4338ecd270b8909a02a47660cfb015e162707dc70c3b84"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.550671 4765 generic.go:334] "Generic (PLEG): container finished" podID="0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" containerID="75c829f86ea1e72d6db4b3b1fb3171b69f34e2fa86fee5ec0303c4c1de4380d6" exitCode=0 Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.550722 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mgkxp" event={"ID":"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1","Type":"ContainerDied","Data":"75c829f86ea1e72d6db4b3b1fb3171b69f34e2fa86fee5ec0303c4c1de4380d6"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.550752 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mgkxp" event={"ID":"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1","Type":"ContainerStarted","Data":"a02f41b7065ce1168395f56609513a1bec876aa0eea8126100ad46cf53d9de84"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.551982 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bd00d94a-54ce-420e-959d-4b10ecce11d0","Type":"ContainerStarted","Data":"5b057c341e3e275e1732befac22782d331b0fa6630468d49caad98b6dcfa52b5"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.553681 4765 generic.go:334] "Generic (PLEG): container finished" podID="977d6049-5c13-47b0-8a4e-c62fea3cd6d7" containerID="67f7ed176fc4e23352446441719955da13f6cdc2c61d8f4174e64ed5b8d83666" exitCode=0 
Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.553741 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qk68p" event={"ID":"977d6049-5c13-47b0-8a4e-c62fea3cd6d7","Type":"ContainerDied","Data":"67f7ed176fc4e23352446441719955da13f6cdc2c61d8f4174e64ed5b8d83666"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.553767 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qk68p" event={"ID":"977d6049-5c13-47b0-8a4e-c62fea3cd6d7","Type":"ContainerStarted","Data":"24b385b818e752750d12bca5653b74f118b57435d76cff42793e463fd8e7a747"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.554937 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b283-account-create-update-jdflm" event={"ID":"bcbb6896-f557-424f-871f-2f5df6968bd6","Type":"ContainerStarted","Data":"0fdf49368f2616e488371112e7543463514068f98d11500cbfdc0ede047fdebf"} Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.554965 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 03 20:56:05 crc kubenswrapper[4765]: I1203 20:56:05.613620 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1b49db51-0762-4252-8b51-beed29a203a0" podUID="bd00d94a-54ce-420e-959d-4b10ecce11d0" Dec 03 20:56:06 crc kubenswrapper[4765]: I1203 20:56:06.370042 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b49db51-0762-4252-8b51-beed29a203a0" path="/var/lib/kubelet/pods/1b49db51-0762-4252-8b51-beed29a203a0/volumes" Dec 03 20:56:06 crc kubenswrapper[4765]: I1203 20:56:06.573393 4765 generic.go:334] "Generic (PLEG): container finished" podID="6a9b4f82-dda0-446d-97c5-b94c26326298" containerID="21db9182694416f5ed515caa7a2995e2ac8decae61b121dbe8b19d7ab15df1ef" exitCode=0 Dec 03 20:56:06 crc kubenswrapper[4765]: I1203 20:56:06.573439 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e997-account-create-update-m6fht" event={"ID":"6a9b4f82-dda0-446d-97c5-b94c26326298","Type":"ContainerDied","Data":"21db9182694416f5ed515caa7a2995e2ac8decae61b121dbe8b19d7ab15df1ef"} Dec 03 20:56:06 crc kubenswrapper[4765]: I1203 20:56:06.578207 4765 generic.go:334] "Generic (PLEG): container finished" podID="bcbb6896-f557-424f-871f-2f5df6968bd6" containerID="3d94ce243bfd1ef677a4258fad3c3e0be1b85c3a5d6209a039176b75a85c2927" exitCode=0 Dec 03 20:56:06 crc kubenswrapper[4765]: I1203 20:56:06.578344 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b283-account-create-update-jdflm" event={"ID":"bcbb6896-f557-424f-871f-2f5df6968bd6","Type":"ContainerDied","Data":"3d94ce243bfd1ef677a4258fad3c3e0be1b85c3a5d6209a039176b75a85c2927"} Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.011804 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.149684 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.190246 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.195827 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrmd5\" (UniqueName: \"kubernetes.io/projected/f6d90462-f312-4c5f-a20d-092d517b41e0-kube-api-access-zrmd5\") pod \"f6d90462-f312-4c5f-a20d-092d517b41e0\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.196714 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d90462-f312-4c5f-a20d-092d517b41e0-operator-scripts\") pod \"f6d90462-f312-4c5f-a20d-092d517b41e0\" (UID: \"f6d90462-f312-4c5f-a20d-092d517b41e0\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.196760 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.197229 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6d90462-f312-4c5f-a20d-092d517b41e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6d90462-f312-4c5f-a20d-092d517b41e0" (UID: "f6d90462-f312-4c5f-a20d-092d517b41e0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.197866 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d90462-f312-4c5f-a20d-092d517b41e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.202885 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d90462-f312-4c5f-a20d-092d517b41e0-kube-api-access-zrmd5" (OuterVolumeSpecName: "kube-api-access-zrmd5") pod "f6d90462-f312-4c5f-a20d-092d517b41e0" (UID: "f6d90462-f312-4c5f-a20d-092d517b41e0"). InnerVolumeSpecName "kube-api-access-zrmd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.299793 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-operator-scripts\") pod \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.299870 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f578a1fd-499c-4610-af9f-3e8ad5555749-operator-scripts\") pod \"f578a1fd-499c-4610-af9f-3e8ad5555749\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.299944 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-operator-scripts\") pod \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.300056 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-cgx9q\" (UniqueName: \"kubernetes.io/projected/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-kube-api-access-cgx9q\") pod \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\" (UID: \"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.300096 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smbx8\" (UniqueName: \"kubernetes.io/projected/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-kube-api-access-smbx8\") pod \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\" (UID: \"977d6049-5c13-47b0-8a4e-c62fea3cd6d7\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.300173 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwp4h\" (UniqueName: \"kubernetes.io/projected/f578a1fd-499c-4610-af9f-3e8ad5555749-kube-api-access-pwp4h\") pod \"f578a1fd-499c-4610-af9f-3e8ad5555749\" (UID: \"f578a1fd-499c-4610-af9f-3e8ad5555749\") " Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.300614 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrmd5\" (UniqueName: \"kubernetes.io/projected/f6d90462-f312-4c5f-a20d-092d517b41e0-kube-api-access-zrmd5\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.301364 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "977d6049-5c13-47b0-8a4e-c62fea3cd6d7" (UID: "977d6049-5c13-47b0-8a4e-c62fea3cd6d7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.301740 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" (UID: "0b2f84ce-d56b-46e8-bcc4-0b65034e87a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.302059 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f578a1fd-499c-4610-af9f-3e8ad5555749-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f578a1fd-499c-4610-af9f-3e8ad5555749" (UID: "f578a1fd-499c-4610-af9f-3e8ad5555749"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.304933 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-kube-api-access-cgx9q" (OuterVolumeSpecName: "kube-api-access-cgx9q") pod "0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" (UID: "0b2f84ce-d56b-46e8-bcc4-0b65034e87a1"). InnerVolumeSpecName "kube-api-access-cgx9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.305058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f578a1fd-499c-4610-af9f-3e8ad5555749-kube-api-access-pwp4h" (OuterVolumeSpecName: "kube-api-access-pwp4h") pod "f578a1fd-499c-4610-af9f-3e8ad5555749" (UID: "f578a1fd-499c-4610-af9f-3e8ad5555749"). InnerVolumeSpecName "kube-api-access-pwp4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.307471 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-kube-api-access-smbx8" (OuterVolumeSpecName: "kube-api-access-smbx8") pod "977d6049-5c13-47b0-8a4e-c62fea3cd6d7" (UID: "977d6049-5c13-47b0-8a4e-c62fea3cd6d7"). InnerVolumeSpecName "kube-api-access-smbx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.403004 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgx9q\" (UniqueName: \"kubernetes.io/projected/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-kube-api-access-cgx9q\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.403047 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smbx8\" (UniqueName: \"kubernetes.io/projected/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-kube-api-access-smbx8\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.403059 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwp4h\" (UniqueName: \"kubernetes.io/projected/f578a1fd-499c-4610-af9f-3e8ad5555749-kube-api-access-pwp4h\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.403070 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.403082 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f578a1fd-499c-4610-af9f-3e8ad5555749-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.403092 4765 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/977d6049-5c13-47b0-8a4e-c62fea3cd6d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.591460 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qk68p" event={"ID":"977d6049-5c13-47b0-8a4e-c62fea3cd6d7","Type":"ContainerDied","Data":"24b385b818e752750d12bca5653b74f118b57435d76cff42793e463fd8e7a747"} Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.591514 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b385b818e752750d12bca5653b74f118b57435d76cff42793e463fd8e7a747" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.591569 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qk68p" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.602527 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-qrslv" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.602525 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qrslv" event={"ID":"f6d90462-f312-4c5f-a20d-092d517b41e0","Type":"ContainerDied","Data":"bd76ebee67dd784c0c57ed80b08abd4bae03216db62cc7736fdace9e30b70b81"} Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.602570 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd76ebee67dd784c0c57ed80b08abd4bae03216db62cc7736fdace9e30b70b81" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.604794 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3373-account-create-update-zcll2" event={"ID":"f578a1fd-499c-4610-af9f-3e8ad5555749","Type":"ContainerDied","Data":"38383eaf149706d83532f1bd472c678d02d5742449b3c5b82c980afecd3724df"} Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.604821 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38383eaf149706d83532f1bd472c678d02d5742449b3c5b82c980afecd3724df" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.604822 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3373-account-create-update-zcll2" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.608894 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mgkxp" Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.609167 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mgkxp" event={"ID":"0b2f84ce-d56b-46e8-bcc4-0b65034e87a1","Type":"ContainerDied","Data":"a02f41b7065ce1168395f56609513a1bec876aa0eea8126100ad46cf53d9de84"} Dec 03 20:56:07 crc kubenswrapper[4765]: I1203 20:56:07.609204 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a02f41b7065ce1168395f56609513a1bec876aa0eea8126100ad46cf53d9de84" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.000141 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.027815 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.114261 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqc5\" (UniqueName: \"kubernetes.io/projected/bcbb6896-f557-424f-871f-2f5df6968bd6-kube-api-access-5dqc5\") pod \"bcbb6896-f557-424f-871f-2f5df6968bd6\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.114450 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcbb6896-f557-424f-871f-2f5df6968bd6-operator-scripts\") pod \"bcbb6896-f557-424f-871f-2f5df6968bd6\" (UID: \"bcbb6896-f557-424f-871f-2f5df6968bd6\") " Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.114884 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbb6896-f557-424f-871f-2f5df6968bd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"bcbb6896-f557-424f-871f-2f5df6968bd6" (UID: "bcbb6896-f557-424f-871f-2f5df6968bd6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.118044 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcbb6896-f557-424f-871f-2f5df6968bd6-kube-api-access-5dqc5" (OuterVolumeSpecName: "kube-api-access-5dqc5") pod "bcbb6896-f557-424f-871f-2f5df6968bd6" (UID: "bcbb6896-f557-424f-871f-2f5df6968bd6"). InnerVolumeSpecName "kube-api-access-5dqc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.215561 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlj2t\" (UniqueName: \"kubernetes.io/projected/6a9b4f82-dda0-446d-97c5-b94c26326298-kube-api-access-vlj2t\") pod \"6a9b4f82-dda0-446d-97c5-b94c26326298\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.215643 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9b4f82-dda0-446d-97c5-b94c26326298-operator-scripts\") pod \"6a9b4f82-dda0-446d-97c5-b94c26326298\" (UID: \"6a9b4f82-dda0-446d-97c5-b94c26326298\") " Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.216074 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9b4f82-dda0-446d-97c5-b94c26326298-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a9b4f82-dda0-446d-97c5-b94c26326298" (UID: "6a9b4f82-dda0-446d-97c5-b94c26326298"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.216133 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcbb6896-f557-424f-871f-2f5df6968bd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.216144 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dqc5\" (UniqueName: \"kubernetes.io/projected/bcbb6896-f557-424f-871f-2f5df6968bd6-kube-api-access-5dqc5\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.222790 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9b4f82-dda0-446d-97c5-b94c26326298-kube-api-access-vlj2t" (OuterVolumeSpecName: "kube-api-access-vlj2t") pod "6a9b4f82-dda0-446d-97c5-b94c26326298" (UID: "6a9b4f82-dda0-446d-97c5-b94c26326298"). InnerVolumeSpecName "kube-api-access-vlj2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.318642 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlj2t\" (UniqueName: \"kubernetes.io/projected/6a9b4f82-dda0-446d-97c5-b94c26326298-kube-api-access-vlj2t\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.318702 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9b4f82-dda0-446d-97c5-b94c26326298-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.645665 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b283-account-create-update-jdflm" event={"ID":"bcbb6896-f557-424f-871f-2f5df6968bd6","Type":"ContainerDied","Data":"0fdf49368f2616e488371112e7543463514068f98d11500cbfdc0ede047fdebf"} Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.645707 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdf49368f2616e488371112e7543463514068f98d11500cbfdc0ede047fdebf" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.645761 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b283-account-create-update-jdflm" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.650266 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e997-account-create-update-m6fht" event={"ID":"6a9b4f82-dda0-446d-97c5-b94c26326298","Type":"ContainerDied","Data":"50fec840002354304b4338ecd270b8909a02a47660cfb015e162707dc70c3b84"} Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.650317 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fec840002354304b4338ecd270b8909a02a47660cfb015e162707dc70c3b84" Dec 03 20:56:08 crc kubenswrapper[4765]: I1203 20:56:08.650360 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e997-account-create-update-m6fht" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.527857 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-llggj"] Dec 03 20:56:09 crc kubenswrapper[4765]: E1203 20:56:09.528465 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9b4f82-dda0-446d-97c5-b94c26326298" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528485 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9b4f82-dda0-446d-97c5-b94c26326298" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: E1203 20:56:09.528500 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d90462-f312-4c5f-a20d-092d517b41e0" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528506 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d90462-f312-4c5f-a20d-092d517b41e0" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: E1203 20:56:09.528520 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f578a1fd-499c-4610-af9f-3e8ad5555749" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528527 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f578a1fd-499c-4610-af9f-3e8ad5555749" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: E1203 20:56:09.528538 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbb6896-f557-424f-871f-2f5df6968bd6" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528544 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbb6896-f557-424f-871f-2f5df6968bd6" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: E1203 20:56:09.528559 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977d6049-5c13-47b0-8a4e-c62fea3cd6d7" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528565 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="977d6049-5c13-47b0-8a4e-c62fea3cd6d7" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: E1203 20:56:09.528579 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528585 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528740 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9b4f82-dda0-446d-97c5-b94c26326298" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528752 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d90462-f312-4c5f-a20d-092d517b41e0" containerName="mariadb-database-create" Dec 03 20:56:09 crc 
kubenswrapper[4765]: I1203 20:56:09.528762 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528778 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbb6896-f557-424f-871f-2f5df6968bd6" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528786 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="977d6049-5c13-47b0-8a4e-c62fea3cd6d7" containerName="mariadb-database-create" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.528793 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f578a1fd-499c-4610-af9f-3e8ad5555749" containerName="mariadb-account-create-update" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.529348 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.537541 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fws5g" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.537743 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.537876 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.553371 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-llggj"] Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.642705 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-combined-ca-bundle\") 
pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.642826 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-scripts\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.642862 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh88b\" (UniqueName: \"kubernetes.io/projected/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-kube-api-access-zh88b\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.642895 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-config-data\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.744834 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.744943 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-scripts\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.744991 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh88b\" (UniqueName: \"kubernetes.io/projected/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-kube-api-access-zh88b\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.745023 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-config-data\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.750352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-scripts\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.750445 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-config-data\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.751741 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.759724 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh88b\" (UniqueName: \"kubernetes.io/projected/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-kube-api-access-zh88b\") pod \"nova-cell0-conductor-db-sync-llggj\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:09 crc kubenswrapper[4765]: I1203 20:56:09.860224 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:11 crc kubenswrapper[4765]: I1203 20:56:11.585240 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 20:56:14 crc kubenswrapper[4765]: I1203 20:56:14.612706 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-llggj"] Dec 03 20:56:14 crc kubenswrapper[4765]: W1203 20:56:14.613269 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b6e2c85_155f_4b33_b3c7_fb8984ebab25.slice/crio-4fa083ae51e4c09c2a0cb088cdd3701c40a2a02eeccc68f24b0387f6b3615daa WatchSource:0}: Error finding container 4fa083ae51e4c09c2a0cb088cdd3701c40a2a02eeccc68f24b0387f6b3615daa: Status 404 returned error can't find the container with id 4fa083ae51e4c09c2a0cb088cdd3701c40a2a02eeccc68f24b0387f6b3615daa Dec 03 20:56:14 crc kubenswrapper[4765]: I1203 20:56:14.712572 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"bd00d94a-54ce-420e-959d-4b10ecce11d0","Type":"ContainerStarted","Data":"e15ec74c486fc7f1b26d0c9abbb6e00beb4927b6b64a89c8a90070dd5a849e41"} Dec 03 20:56:14 crc kubenswrapper[4765]: I1203 20:56:14.714732 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-llggj" event={"ID":"8b6e2c85-155f-4b33-b3c7-fb8984ebab25","Type":"ContainerStarted","Data":"4fa083ae51e4c09c2a0cb088cdd3701c40a2a02eeccc68f24b0387f6b3615daa"} Dec 03 20:56:14 crc kubenswrapper[4765]: I1203 20:56:14.734773 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.835360418 podStartE2EDuration="10.734751849s" podCreationTimestamp="2025-12-03 20:56:04 +0000 UTC" firstStartedPulling="2025-12-03 20:56:05.224508276 +0000 UTC m=+1063.155053427" lastFinishedPulling="2025-12-03 20:56:14.123899697 +0000 UTC m=+1072.054444858" observedRunningTime="2025-12-03 20:56:14.725887651 +0000 UTC m=+1072.656432822" watchObservedRunningTime="2025-12-03 20:56:14.734751849 +0000 UTC m=+1072.665297000" Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.366182 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.366830 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-central-agent" containerID="cri-o://9f232c9fe5d940e6d58a46cdafd4b8e3fe6e94038c790de89e723477397d9b38" gracePeriod=30 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.366891 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="sg-core" containerID="cri-o://843e4d6f431325a58714c2dce76d78766e3277f8a48aed80ccc5ee86c6550c5f" gracePeriod=30 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.366904 4765 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="proxy-httpd" containerID="cri-o://a14b94a66afb0869ac3066a5260145efb439f9a646f9c07cc75950be2f8f4c18" gracePeriod=30 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.366916 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-notification-agent" containerID="cri-o://91b493be6364a8679a934ac275d10acc058851fc1a50ebac7fece811a6f7280a" gracePeriod=30 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.434806 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.435067 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" containerName="kube-state-metrics" containerID="cri-o://0b175116666f690f63043dc4ae2bf28765d421d3174452db555139f9aba60604" gracePeriod=30 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.749572 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerID="a14b94a66afb0869ac3066a5260145efb439f9a646f9c07cc75950be2f8f4c18" exitCode=0 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.749607 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerID="843e4d6f431325a58714c2dce76d78766e3277f8a48aed80ccc5ee86c6550c5f" exitCode=2 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.749652 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerDied","Data":"a14b94a66afb0869ac3066a5260145efb439f9a646f9c07cc75950be2f8f4c18"} Dec 03 20:56:15 crc kubenswrapper[4765]: 
I1203 20:56:15.749684 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerDied","Data":"843e4d6f431325a58714c2dce76d78766e3277f8a48aed80ccc5ee86c6550c5f"} Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.754977 4765 generic.go:334] "Generic (PLEG): container finished" podID="ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" containerID="0b175116666f690f63043dc4ae2bf28765d421d3174452db555139f9aba60604" exitCode=2 Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.756158 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7","Type":"ContainerDied","Data":"0b175116666f690f63043dc4ae2bf28765d421d3174452db555139f9aba60604"} Dec 03 20:56:15 crc kubenswrapper[4765]: I1203 20:56:15.890531 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.010735 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4r8j\" (UniqueName: \"kubernetes.io/projected/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7-kube-api-access-l4r8j\") pod \"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7\" (UID: \"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7\") " Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.016497 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7-kube-api-access-l4r8j" (OuterVolumeSpecName: "kube-api-access-l4r8j") pod "ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" (UID: "ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7"). InnerVolumeSpecName "kube-api-access-l4r8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.113902 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4r8j\" (UniqueName: \"kubernetes.io/projected/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7-kube-api-access-l4r8j\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.764254 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7","Type":"ContainerDied","Data":"5606a55f4755aa3fc90660c00fd67d8e244fdea4f2818ffdd2df82e688e8f6b7"} Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.764934 4765 scope.go:117] "RemoveContainer" containerID="0b175116666f690f63043dc4ae2bf28765d421d3174452db555139f9aba60604" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.764464 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.769700 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerID="9f232c9fe5d940e6d58a46cdafd4b8e3fe6e94038c790de89e723477397d9b38" exitCode=0 Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.769753 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerDied","Data":"9f232c9fe5d940e6d58a46cdafd4b8e3fe6e94038c790de89e723477397d9b38"} Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.812788 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.846558 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.855140 4765 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/kube-state-metrics-0"] Dec 03 20:56:16 crc kubenswrapper[4765]: E1203 20:56:16.855496 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" containerName="kube-state-metrics" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.855512 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" containerName="kube-state-metrics" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.855696 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" containerName="kube-state-metrics" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.856230 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.858873 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.859124 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 03 20:56:16 crc kubenswrapper[4765]: I1203 20:56:16.867220 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.031884 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.031983 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.032080 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxcd\" (UniqueName: \"kubernetes.io/projected/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-api-access-cnxcd\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.032100 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.133683 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnxcd\" (UniqueName: \"kubernetes.io/projected/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-api-access-cnxcd\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.133721 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.133761 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.133836 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.138774 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.143347 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.143454 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.148574 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnxcd\" (UniqueName: 
\"kubernetes.io/projected/5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b-kube-api-access-cnxcd\") pod \"kube-state-metrics-0\" (UID: \"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b\") " pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.180153 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.629693 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 20:56:17 crc kubenswrapper[4765]: W1203 20:56:17.642556 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d1a0b5d_2754_4bb8_bc2c_e20ba9631e8b.slice/crio-b3151a4894fcd69c29f84c46261cbb054a616a5910fa605c70ba355f2d6f644a WatchSource:0}: Error finding container b3151a4894fcd69c29f84c46261cbb054a616a5910fa605c70ba355f2d6f644a: Status 404 returned error can't find the container with id b3151a4894fcd69c29f84c46261cbb054a616a5910fa605c70ba355f2d6f644a Dec 03 20:56:17 crc kubenswrapper[4765]: I1203 20:56:17.800920 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b","Type":"ContainerStarted","Data":"b3151a4894fcd69c29f84c46261cbb054a616a5910fa605c70ba355f2d6f644a"} Dec 03 20:56:18 crc kubenswrapper[4765]: I1203 20:56:18.370867 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7" path="/var/lib/kubelet/pods/ff8b5340-cd1b-4a69-a4ea-2ad1bb35b8a7/volumes" Dec 03 20:56:19 crc kubenswrapper[4765]: I1203 20:56:19.837553 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerID="91b493be6364a8679a934ac275d10acc058851fc1a50ebac7fece811a6f7280a" exitCode=0 Dec 03 20:56:19 crc kubenswrapper[4765]: I1203 20:56:19.837609 4765 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerDied","Data":"91b493be6364a8679a934ac275d10acc058851fc1a50ebac7fece811a6f7280a"} Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.247960 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358045 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-log-httpd\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358137 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-run-httpd\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358209 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-config-data\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358258 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-sg-core-conf-yaml\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358355 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-combined-ca-bundle\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358412 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2whvp\" (UniqueName: \"kubernetes.io/projected/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-kube-api-access-2whvp\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358475 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-scripts\") pod \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\" (UID: \"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad\") " Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358544 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358865 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.358886 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.370427 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-kube-api-access-2whvp" (OuterVolumeSpecName: "kube-api-access-2whvp") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). InnerVolumeSpecName "kube-api-access-2whvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.370448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-scripts" (OuterVolumeSpecName: "scripts") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.383076 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.422894 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.447608 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-config-data" (OuterVolumeSpecName: "config-data") pod "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" (UID: "fc69ab20-8973-4b27-a0e0-818ef2a4b1ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.461600 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.462061 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.462077 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.462089 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.462099 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2whvp\" (UniqueName: \"kubernetes.io/projected/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-kube-api-access-2whvp\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.462108 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.888021 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b","Type":"ContainerStarted","Data":"adef79a87bcb37f2466d7618f7d20c702a9b5d33cb65a9b03be35d455ba16c27"} Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.889194 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.891354 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc69ab20-8973-4b27-a0e0-818ef2a4b1ad","Type":"ContainerDied","Data":"93fedbba1de8627b7cf8d4745d3f85f7ad97d2b60247c28690b37cab2aac9688"} Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.891393 4765 scope.go:117] "RemoveContainer" containerID="a14b94a66afb0869ac3066a5260145efb439f9a646f9c07cc75950be2f8f4c18" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.891493 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.896388 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-llggj" event={"ID":"8b6e2c85-155f-4b33-b3c7-fb8984ebab25","Type":"ContainerStarted","Data":"440aabab35b3e1374dabf31e92da5c1fab2b7bb5aef36c753ac613f36e1d7b9f"} Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.917153 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.593490054 podStartE2EDuration="8.917136017s" podCreationTimestamp="2025-12-03 20:56:16 +0000 UTC" firstStartedPulling="2025-12-03 20:56:17.645534305 +0000 UTC m=+1075.576079456" lastFinishedPulling="2025-12-03 20:56:23.969180268 +0000 UTC m=+1081.899725419" observedRunningTime="2025-12-03 20:56:24.91135937 +0000 UTC m=+1082.841904541" watchObservedRunningTime="2025-12-03 20:56:24.917136017 +0000 UTC m=+1082.847681168" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.922296 4765 scope.go:117] "RemoveContainer" containerID="843e4d6f431325a58714c2dce76d78766e3277f8a48aed80ccc5ee86c6550c5f" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.933669 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-llggj" podStartSLOduration=6.504490263 podStartE2EDuration="15.933644641s" podCreationTimestamp="2025-12-03 20:56:09 +0000 UTC" firstStartedPulling="2025-12-03 20:56:14.615402832 +0000 UTC m=+1072.545947983" lastFinishedPulling="2025-12-03 20:56:24.0445572 +0000 UTC m=+1081.975102361" observedRunningTime="2025-12-03 20:56:24.927572477 +0000 UTC m=+1082.858117628" watchObservedRunningTime="2025-12-03 20:56:24.933644641 +0000 UTC m=+1082.864189792" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.947925 4765 scope.go:117] "RemoveContainer" containerID="91b493be6364a8679a934ac275d10acc058851fc1a50ebac7fece811a6f7280a" Dec 03 20:56:24 
crc kubenswrapper[4765]: I1203 20:56:24.956903 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.966402 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.995535 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:24 crc kubenswrapper[4765]: E1203 20:56:24.996530 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-central-agent" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.996657 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-central-agent" Dec 03 20:56:24 crc kubenswrapper[4765]: E1203 20:56:24.996724 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="sg-core" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.996778 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="sg-core" Dec 03 20:56:24 crc kubenswrapper[4765]: E1203 20:56:24.996844 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-notification-agent" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.996915 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-notification-agent" Dec 03 20:56:24 crc kubenswrapper[4765]: E1203 20:56:24.996987 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="proxy-httpd" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.997037 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="proxy-httpd" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.997373 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="proxy-httpd" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.997473 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-notification-agent" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.997556 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="sg-core" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.997626 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" containerName="ceilometer-central-agent" Dec 03 20:56:24 crc kubenswrapper[4765]: I1203 20:56:24.999427 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.002321 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.002489 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.002695 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.004181 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.031998 4765 scope.go:117] "RemoveContainer" containerID="9f232c9fe5d940e6d58a46cdafd4b8e3fe6e94038c790de89e723477397d9b38" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073088 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-run-httpd\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073136 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073165 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b694j\" (UniqueName: \"kubernetes.io/projected/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-kube-api-access-b694j\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " 
pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073184 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-scripts\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073245 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073281 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-log-httpd\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073322 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-config-data\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.073347 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175132 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175187 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b694j\" (UniqueName: \"kubernetes.io/projected/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-kube-api-access-b694j\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175214 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-scripts\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175242 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175278 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-log-httpd\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175337 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-config-data\") pod \"ceilometer-0\" (UID: 
\"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175357 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.175431 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-run-httpd\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.176341 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-run-httpd\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.177104 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-log-httpd\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.179191 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.179715 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.180562 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-config-data\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.182602 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-scripts\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.198660 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b694j\" (UniqueName: \"kubernetes.io/projected/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-kube-api-access-b694j\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.199097 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.334780 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.813052 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:25 crc kubenswrapper[4765]: I1203 20:56:25.909974 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerStarted","Data":"03dc880bb279a7992b840bc59a656b79abd094dae7bd5ea9dc77c23534a3fb60"} Dec 03 20:56:26 crc kubenswrapper[4765]: I1203 20:56:26.373415 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc69ab20-8973-4b27-a0e0-818ef2a4b1ad" path="/var/lib/kubelet/pods/fc69ab20-8973-4b27-a0e0-818ef2a4b1ad/volumes" Dec 03 20:56:26 crc kubenswrapper[4765]: I1203 20:56:26.918985 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerStarted","Data":"5d1ec231bcb9996723eeee74f4438747369d406bb628bf57e24e4a10c4578cc4"} Dec 03 20:56:27 crc kubenswrapper[4765]: I1203 20:56:27.927854 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerStarted","Data":"02ad1669e9c640e7f24dd8006d8047c5a6a36e741fdecae4c819b0f4ed6435d4"} Dec 03 20:56:30 crc kubenswrapper[4765]: I1203 20:56:30.958097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerStarted","Data":"ecbe7e9a195aa643d8b3d0e8cc14bb8f1e1f48f9861bb9c0bc4c2cbdf3a8627f"} Dec 03 20:56:31 crc kubenswrapper[4765]: I1203 20:56:31.629686 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:32 crc kubenswrapper[4765]: I1203 20:56:32.993009 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerStarted","Data":"07e549b9e3702b44308585976e9a528e117bd7a1b2793d1a56bd822b7c587aec"} Dec 03 20:56:32 crc kubenswrapper[4765]: I1203 20:56:32.993404 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 20:56:32 crc kubenswrapper[4765]: I1203 20:56:32.993205 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-central-agent" containerID="cri-o://5d1ec231bcb9996723eeee74f4438747369d406bb628bf57e24e4a10c4578cc4" gracePeriod=30 Dec 03 20:56:32 crc kubenswrapper[4765]: I1203 20:56:32.993253 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-notification-agent" containerID="cri-o://02ad1669e9c640e7f24dd8006d8047c5a6a36e741fdecae4c819b0f4ed6435d4" gracePeriod=30 Dec 03 20:56:32 crc kubenswrapper[4765]: I1203 20:56:32.993253 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="proxy-httpd" containerID="cri-o://07e549b9e3702b44308585976e9a528e117bd7a1b2793d1a56bd822b7c587aec" gracePeriod=30 Dec 03 20:56:32 crc kubenswrapper[4765]: I1203 20:56:32.993232 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="sg-core" containerID="cri-o://ecbe7e9a195aa643d8b3d0e8cc14bb8f1e1f48f9861bb9c0bc4c2cbdf3a8627f" gracePeriod=30 Dec 03 20:56:33 crc kubenswrapper[4765]: I1203 20:56:33.021434 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.906966231 podStartE2EDuration="9.021417068s" podCreationTimestamp="2025-12-03 20:56:24 +0000 UTC" 
firstStartedPulling="2025-12-03 20:56:25.807205823 +0000 UTC m=+1083.737750974" lastFinishedPulling="2025-12-03 20:56:31.92165666 +0000 UTC m=+1089.852201811" observedRunningTime="2025-12-03 20:56:33.019178518 +0000 UTC m=+1090.949723669" watchObservedRunningTime="2025-12-03 20:56:33.021417068 +0000 UTC m=+1090.951962219" Dec 03 20:56:34 crc kubenswrapper[4765]: I1203 20:56:34.003688 4765 generic.go:334] "Generic (PLEG): container finished" podID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerID="07e549b9e3702b44308585976e9a528e117bd7a1b2793d1a56bd822b7c587aec" exitCode=0 Dec 03 20:56:34 crc kubenswrapper[4765]: I1203 20:56:34.004051 4765 generic.go:334] "Generic (PLEG): container finished" podID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerID="ecbe7e9a195aa643d8b3d0e8cc14bb8f1e1f48f9861bb9c0bc4c2cbdf3a8627f" exitCode=2 Dec 03 20:56:34 crc kubenswrapper[4765]: I1203 20:56:34.003751 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerDied","Data":"07e549b9e3702b44308585976e9a528e117bd7a1b2793d1a56bd822b7c587aec"} Dec 03 20:56:34 crc kubenswrapper[4765]: I1203 20:56:34.004108 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerDied","Data":"ecbe7e9a195aa643d8b3d0e8cc14bb8f1e1f48f9861bb9c0bc4c2cbdf3a8627f"} Dec 03 20:56:34 crc kubenswrapper[4765]: I1203 20:56:34.004125 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerDied","Data":"02ad1669e9c640e7f24dd8006d8047c5a6a36e741fdecae4c819b0f4ed6435d4"} Dec 03 20:56:34 crc kubenswrapper[4765]: I1203 20:56:34.004061 4765 generic.go:334] "Generic (PLEG): container finished" podID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerID="02ad1669e9c640e7f24dd8006d8047c5a6a36e741fdecae4c819b0f4ed6435d4" exitCode=0 Dec 03 20:56:36 crc 
kubenswrapper[4765]: I1203 20:56:36.024012 4765 generic.go:334] "Generic (PLEG): container finished" podID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerID="5d1ec231bcb9996723eeee74f4438747369d406bb628bf57e24e4a10c4578cc4" exitCode=0 Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.024092 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerDied","Data":"5d1ec231bcb9996723eeee74f4438747369d406bb628bf57e24e4a10c4578cc4"} Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.144926 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.311446 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-combined-ca-bundle\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.311817 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-run-httpd\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.311923 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-log-httpd\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312104 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-config-data\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312290 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-ceilometer-tls-certs\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312430 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312500 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312485 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b694j\" (UniqueName: \"kubernetes.io/projected/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-kube-api-access-b694j\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312873 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-scripts\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.312921 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-sg-core-conf-yaml\") pod \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\" (UID: \"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc\") " Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.313676 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.313699 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.317437 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-kube-api-access-b694j" (OuterVolumeSpecName: "kube-api-access-b694j") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). 
InnerVolumeSpecName "kube-api-access-b694j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.317498 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-scripts" (OuterVolumeSpecName: "scripts") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.351458 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.385250 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.399206 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.415460 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.415496 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b694j\" (UniqueName: \"kubernetes.io/projected/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-kube-api-access-b694j\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.415507 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.415516 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.415524 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.422721 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-config-data" (OuterVolumeSpecName: "config-data") pod "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" (UID: "c87ff6a6-9afe-4d08-9a72-7262fbde4fbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:36 crc kubenswrapper[4765]: I1203 20:56:36.517214 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.040590 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c87ff6a6-9afe-4d08-9a72-7262fbde4fbc","Type":"ContainerDied","Data":"03dc880bb279a7992b840bc59a656b79abd094dae7bd5ea9dc77c23534a3fb60"} Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.040699 4765 scope.go:117] "RemoveContainer" containerID="07e549b9e3702b44308585976e9a528e117bd7a1b2793d1a56bd822b7c587aec" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.042008 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.087516 4765 scope.go:117] "RemoveContainer" containerID="ecbe7e9a195aa643d8b3d0e8cc14bb8f1e1f48f9861bb9c0bc4c2cbdf3a8627f" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.101808 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.112899 4765 scope.go:117] "RemoveContainer" containerID="02ad1669e9c640e7f24dd8006d8047c5a6a36e741fdecae4c819b0f4ed6435d4" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.129905 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.138680 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:37 crc kubenswrapper[4765]: E1203 20:56:37.139042 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="proxy-httpd" Dec 03 20:56:37 crc 
kubenswrapper[4765]: I1203 20:56:37.139061 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="proxy-httpd" Dec 03 20:56:37 crc kubenswrapper[4765]: E1203 20:56:37.139072 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-notification-agent" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139079 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-notification-agent" Dec 03 20:56:37 crc kubenswrapper[4765]: E1203 20:56:37.139098 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="sg-core" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139105 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="sg-core" Dec 03 20:56:37 crc kubenswrapper[4765]: E1203 20:56:37.139115 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-central-agent" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139120 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-central-agent" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139325 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="ceilometer-central-agent" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139338 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="sg-core" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139356 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" 
containerName="ceilometer-notification-agent" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.139372 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" containerName="proxy-httpd" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.140934 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.144869 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.145424 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.145962 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.164814 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.165024 4765 scope.go:117] "RemoveContainer" containerID="5d1ec231bcb9996723eeee74f4438747369d406bb628bf57e24e4a10c4578cc4" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.192409 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232393 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232477 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-log-httpd\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232588 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232625 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkhl\" (UniqueName: \"kubernetes.io/projected/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-kube-api-access-4pkhl\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232719 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-config-data\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232786 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-scripts\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232832 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.232928 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-run-httpd\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335057 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335112 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-log-httpd\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335160 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335189 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pkhl\" (UniqueName: \"kubernetes.io/projected/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-kube-api-access-4pkhl\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335243 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-config-data\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335277 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-scripts\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335327 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.335397 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-run-httpd\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.336201 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-run-httpd\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.336535 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-log-httpd\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc 
kubenswrapper[4765]: I1203 20:56:37.341176 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.342199 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-scripts\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.345851 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.349265 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-config-data\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.351768 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.354588 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pkhl\" (UniqueName: \"kubernetes.io/projected/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-kube-api-access-4pkhl\") pod \"ceilometer-0\" 
(UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.469856 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:56:37 crc kubenswrapper[4765]: I1203 20:56:37.928129 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:56:38 crc kubenswrapper[4765]: I1203 20:56:38.051395 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerStarted","Data":"8fe6b8f641ee11c4c5e07866b8dd3840b8064351a2ebd1554af7baf157374752"} Dec 03 20:56:38 crc kubenswrapper[4765]: I1203 20:56:38.054597 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-llggj" event={"ID":"8b6e2c85-155f-4b33-b3c7-fb8984ebab25","Type":"ContainerDied","Data":"440aabab35b3e1374dabf31e92da5c1fab2b7bb5aef36c753ac613f36e1d7b9f"} Dec 03 20:56:38 crc kubenswrapper[4765]: I1203 20:56:38.054527 4765 generic.go:334] "Generic (PLEG): container finished" podID="8b6e2c85-155f-4b33-b3c7-fb8984ebab25" containerID="440aabab35b3e1374dabf31e92da5c1fab2b7bb5aef36c753ac613f36e1d7b9f" exitCode=0 Dec 03 20:56:38 crc kubenswrapper[4765]: I1203 20:56:38.373374 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c87ff6a6-9afe-4d08-9a72-7262fbde4fbc" path="/var/lib/kubelet/pods/c87ff6a6-9afe-4d08-9a72-7262fbde4fbc/volumes" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.071325 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerStarted","Data":"b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39"} Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.418981 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.484668 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-scripts\") pod \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.484718 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-combined-ca-bundle\") pod \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.484759 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-config-data\") pod \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.484796 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh88b\" (UniqueName: \"kubernetes.io/projected/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-kube-api-access-zh88b\") pod \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\" (UID: \"8b6e2c85-155f-4b33-b3c7-fb8984ebab25\") " Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.489419 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-scripts" (OuterVolumeSpecName: "scripts") pod "8b6e2c85-155f-4b33-b3c7-fb8984ebab25" (UID: "8b6e2c85-155f-4b33-b3c7-fb8984ebab25"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.490569 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-kube-api-access-zh88b" (OuterVolumeSpecName: "kube-api-access-zh88b") pod "8b6e2c85-155f-4b33-b3c7-fb8984ebab25" (UID: "8b6e2c85-155f-4b33-b3c7-fb8984ebab25"). InnerVolumeSpecName "kube-api-access-zh88b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.513930 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b6e2c85-155f-4b33-b3c7-fb8984ebab25" (UID: "8b6e2c85-155f-4b33-b3c7-fb8984ebab25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.514660 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-config-data" (OuterVolumeSpecName: "config-data") pod "8b6e2c85-155f-4b33-b3c7-fb8984ebab25" (UID: "8b6e2c85-155f-4b33-b3c7-fb8984ebab25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.587580 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.587878 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.588004 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:39 crc kubenswrapper[4765]: I1203 20:56:39.588115 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh88b\" (UniqueName: \"kubernetes.io/projected/8b6e2c85-155f-4b33-b3c7-fb8984ebab25-kube-api-access-zh88b\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.089459 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerStarted","Data":"e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a"} Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.098522 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerStarted","Data":"b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09"} Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.098579 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-llggj" 
event={"ID":"8b6e2c85-155f-4b33-b3c7-fb8984ebab25","Type":"ContainerDied","Data":"4fa083ae51e4c09c2a0cb088cdd3701c40a2a02eeccc68f24b0387f6b3615daa"} Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.098600 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa083ae51e4c09c2a0cb088cdd3701c40a2a02eeccc68f24b0387f6b3615daa" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.092423 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-llggj" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.199288 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 20:56:40 crc kubenswrapper[4765]: E1203 20:56:40.199756 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6e2c85-155f-4b33-b3c7-fb8984ebab25" containerName="nova-cell0-conductor-db-sync" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.199778 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6e2c85-155f-4b33-b3c7-fb8984ebab25" containerName="nova-cell0-conductor-db-sync" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.199992 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6e2c85-155f-4b33-b3c7-fb8984ebab25" containerName="nova-cell0-conductor-db-sync" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.200721 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.206211 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.206571 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fws5g" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.221154 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.303097 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae73c00-5619-4157-9bf2-4996314616aa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.303144 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldb2d\" (UniqueName: \"kubernetes.io/projected/fae73c00-5619-4157-9bf2-4996314616aa-kube-api-access-ldb2d\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.303169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae73c00-5619-4157-9bf2-4996314616aa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.404326 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fae73c00-5619-4157-9bf2-4996314616aa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.404378 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldb2d\" (UniqueName: \"kubernetes.io/projected/fae73c00-5619-4157-9bf2-4996314616aa-kube-api-access-ldb2d\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.404401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae73c00-5619-4157-9bf2-4996314616aa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.410216 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae73c00-5619-4157-9bf2-4996314616aa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.410418 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae73c00-5619-4157-9bf2-4996314616aa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.434974 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldb2d\" (UniqueName: \"kubernetes.io/projected/fae73c00-5619-4157-9bf2-4996314616aa-kube-api-access-ldb2d\") pod \"nova-cell0-conductor-0\" 
(UID: \"fae73c00-5619-4157-9bf2-4996314616aa\") " pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:40 crc kubenswrapper[4765]: I1203 20:56:40.517956 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:41 crc kubenswrapper[4765]: I1203 20:56:41.004751 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 20:56:41 crc kubenswrapper[4765]: W1203 20:56:41.009622 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae73c00_5619_4157_9bf2_4996314616aa.slice/crio-77e7b6f745590c04b60cf4c3aef762084884fa1aa76f077ac6351a3750487afe WatchSource:0}: Error finding container 77e7b6f745590c04b60cf4c3aef762084884fa1aa76f077ac6351a3750487afe: Status 404 returned error can't find the container with id 77e7b6f745590c04b60cf4c3aef762084884fa1aa76f077ac6351a3750487afe Dec 03 20:56:41 crc kubenswrapper[4765]: I1203 20:56:41.109200 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerStarted","Data":"07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718"} Dec 03 20:56:41 crc kubenswrapper[4765]: I1203 20:56:41.109567 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 20:56:41 crc kubenswrapper[4765]: I1203 20:56:41.111196 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fae73c00-5619-4157-9bf2-4996314616aa","Type":"ContainerStarted","Data":"77e7b6f745590c04b60cf4c3aef762084884fa1aa76f077ac6351a3750487afe"} Dec 03 20:56:41 crc kubenswrapper[4765]: I1203 20:56:41.155624 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.261096648 podStartE2EDuration="4.155605916s" podCreationTimestamp="2025-12-03 
20:56:37 +0000 UTC" firstStartedPulling="2025-12-03 20:56:37.929710468 +0000 UTC m=+1095.860255619" lastFinishedPulling="2025-12-03 20:56:40.824219726 +0000 UTC m=+1098.754764887" observedRunningTime="2025-12-03 20:56:41.141355783 +0000 UTC m=+1099.071900974" watchObservedRunningTime="2025-12-03 20:56:41.155605916 +0000 UTC m=+1099.086151067" Dec 03 20:56:42 crc kubenswrapper[4765]: I1203 20:56:42.121894 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fae73c00-5619-4157-9bf2-4996314616aa","Type":"ContainerStarted","Data":"a76545f11b8c8c6fe65a5d868c521aa18fe516c7f197538a517ef224b1968da5"} Dec 03 20:56:42 crc kubenswrapper[4765]: I1203 20:56:42.144655 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.144633441 podStartE2EDuration="2.144633441s" podCreationTimestamp="2025-12-03 20:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:56:42.144437096 +0000 UTC m=+1100.074982257" watchObservedRunningTime="2025-12-03 20:56:42.144633441 +0000 UTC m=+1100.075178582" Dec 03 20:56:43 crc kubenswrapper[4765]: I1203 20:56:43.132192 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:50 crc kubenswrapper[4765]: I1203 20:56:50.561390 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.208615 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rtsrx"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.211119 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.213851 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.214061 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.218434 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rtsrx"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.325564 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-scripts\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.325653 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.325708 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-kube-api-access-jltl6\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.325957 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-config-data\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.399754 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.401112 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.404396 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.408665 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.436222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-scripts\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.436289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.436365 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-kube-api-access-jltl6\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " 
pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.436501 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-config-data\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.451229 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-scripts\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.458686 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.460271 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-config-data\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.460812 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-kube-api-access-jltl6\") pod \"nova-cell0-cell-mapping-rtsrx\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 
20:56:51.534478 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.536086 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.537758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.537880 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbkh\" (UniqueName: \"kubernetes.io/projected/d01ea351-8b90-4aa4-aeed-c914173d5389-kube-api-access-clbkh\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.537959 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-config-data\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.538727 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.547897 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.583710 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641184 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd779c1b-bf55-4c6a-8274-0934604141a2-logs\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641631 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clbkh\" (UniqueName: \"kubernetes.io/projected/d01ea351-8b90-4aa4-aeed-c914173d5389-kube-api-access-clbkh\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641684 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641711 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-config-data\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641753 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-config-data\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641773 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmdj\" (UniqueName: \"kubernetes.io/projected/dd779c1b-bf55-4c6a-8274-0934604141a2-kube-api-access-vkmdj\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.641820 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.652661 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.655473 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.666074 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.667670 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.675859 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.685958 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-config-data\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " 
pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.703773 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.704963 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.712495 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.722228 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbkh\" (UniqueName: \"kubernetes.io/projected/d01ea351-8b90-4aa4-aeed-c914173d5389-kube-api-access-clbkh\") pod \"nova-scheduler-0\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.732979 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745035 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-logs\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745089 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-config-data\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745110 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd779c1b-bf55-4c6a-8274-0934604141a2-logs\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745139 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745163 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745178 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-config-data\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745210 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrb6\" (UniqueName: \"kubernetes.io/projected/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-kube-api-access-9hrb6\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.745234 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmdj\" (UniqueName: \"kubernetes.io/projected/dd779c1b-bf55-4c6a-8274-0934604141a2-kube-api-access-vkmdj\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.755941 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-config-data\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.759039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.759344 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd779c1b-bf55-4c6a-8274-0934604141a2-logs\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " 
pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.768884 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.775840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmdj\" (UniqueName: \"kubernetes.io/projected/dd779c1b-bf55-4c6a-8274-0934604141a2-kube-api-access-vkmdj\") pod \"nova-metadata-0\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.775910 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-txjf9"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.784727 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.830845 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-txjf9"] Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846229 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-config-data\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846262 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846357 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846402 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hrb6\" (UniqueName: \"kubernetes.io/projected/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-kube-api-access-9hrb6\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846451 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-kube-api-access-gxdsc\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846469 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.846514 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-logs\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.847035 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-logs\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " 
pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.852887 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.853334 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-config-data\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.853683 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.869265 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hrb6\" (UniqueName: \"kubernetes.io/projected/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-kube-api-access-9hrb6\") pod \"nova-api-0\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " pod="openstack/nova-api-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948449 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvvr\" (UniqueName: \"kubernetes.io/projected/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-kube-api-access-dlvvr\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948549 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: 
\"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948592 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-kube-api-access-gxdsc\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948611 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948643 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948672 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948724 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.948746 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-config\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.952497 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.955771 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:51 crc kubenswrapper[4765]: I1203 20:56:51.968018 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-kube-api-access-gxdsc\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.050381 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 
20:56:52.050464 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.050493 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.050534 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-config\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.050561 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlvvr\" (UniqueName: \"kubernetes.io/projected/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-kube-api-access-dlvvr\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.051428 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.051503 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.051505 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.052681 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-config\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.071034 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlvvr\" (UniqueName: \"kubernetes.io/projected/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-kube-api-access-dlvvr\") pod \"dnsmasq-dns-8b8cf6657-txjf9\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.111876 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.146932 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.161736 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.183623 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rtsrx"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.227836 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg8mp"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.229006 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.233871 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.234044 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.237242 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rtsrx" event={"ID":"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0","Type":"ContainerStarted","Data":"a2be9257e83ee9e0af271fa78a94c45073c2ffc1b2cf946a777e46e91d989d83"} Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.282390 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg8mp"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.299588 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.368992 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " 
pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.369111 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-scripts\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.369579 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-config-data\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.369954 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjnf\" (UniqueName: \"kubernetes.io/projected/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-kube-api-access-wdjnf\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.405970 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.474883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdjnf\" (UniqueName: \"kubernetes.io/projected/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-kube-api-access-wdjnf\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.474950 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.474997 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-scripts\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.475118 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-config-data\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.483078 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-scripts\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.483153 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-config-data\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.483404 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.491746 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdjnf\" (UniqueName: \"kubernetes.io/projected/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-kube-api-access-wdjnf\") pod \"nova-cell1-conductor-db-sync-cg8mp\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.566747 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.695018 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.793378 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:56:52 crc kubenswrapper[4765]: I1203 20:56:52.886446 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-txjf9"] Dec 03 20:56:52 crc kubenswrapper[4765]: W1203 20:56:52.895389 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbb6de7b_6119_49ae_ad28_7d6a92195ab8.slice/crio-9cdfc316ec2416d8a80e10879aeca558d888c1a50e276d3ceef7715b407d848a WatchSource:0}: Error finding container 9cdfc316ec2416d8a80e10879aeca558d888c1a50e276d3ceef7715b407d848a: Status 404 returned error can't find the container with id 9cdfc316ec2416d8a80e10879aeca558d888c1a50e276d3ceef7715b407d848a Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.045896 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-cg8mp"] Dec 03 20:56:53 crc kubenswrapper[4765]: W1203 20:56:53.055255 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5979c42_e1df_4dd8_af4d_7e02e309bcc0.slice/crio-48c45e8e540cb398b3b6970201ff076d8cedb3d4382e97c0c8a65be4e2e87c1c WatchSource:0}: Error finding container 48c45e8e540cb398b3b6970201ff076d8cedb3d4382e97c0c8a65be4e2e87c1c: Status 404 returned error can't find the container with id 48c45e8e540cb398b3b6970201ff076d8cedb3d4382e97c0c8a65be4e2e87c1c Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.248529 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6","Type":"ContainerStarted","Data":"2c850adcac269beb1e518588e44a21b9602bfbfab6b6a92bd6b5bb96473cfed8"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.250422 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd779c1b-bf55-4c6a-8274-0934604141a2","Type":"ContainerStarted","Data":"9dd0af3ca14948e8c6cd424fc35cbbadb4ff2833d6892ae5654f5124070a21be"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.252097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0962251-5751-4fc8-a8c1-c7908e4d1fe9","Type":"ContainerStarted","Data":"103da01cbe3ac9df8f68ce2bac6b51c9dd22435dcbfb3b9494bc167b2c1fe8d6"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.254472 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" event={"ID":"e5979c42-e1df-4dd8-af4d-7e02e309bcc0","Type":"ContainerStarted","Data":"48c45e8e540cb398b3b6970201ff076d8cedb3d4382e97c0c8a65be4e2e87c1c"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.256595 4765 generic.go:334] "Generic (PLEG): container finished" podID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" 
containerID="f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15" exitCode=0 Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.256656 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" event={"ID":"fbb6de7b-6119-49ae-ad28-7d6a92195ab8","Type":"ContainerDied","Data":"f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.256677 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" event={"ID":"fbb6de7b-6119-49ae-ad28-7d6a92195ab8","Type":"ContainerStarted","Data":"9cdfc316ec2416d8a80e10879aeca558d888c1a50e276d3ceef7715b407d848a"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.260527 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rtsrx" event={"ID":"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0","Type":"ContainerStarted","Data":"0775648bb788239c52f7c759bd63ac9471f09976758422b8390bfb0ef802681c"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.263174 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d01ea351-8b90-4aa4-aeed-c914173d5389","Type":"ContainerStarted","Data":"c7f0e7627be50c4f19a07cb8f7362a91cea628224e09384d4f3e67938ee61a8f"} Dec 03 20:56:53 crc kubenswrapper[4765]: I1203 20:56:53.302647 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rtsrx" podStartSLOduration=2.302631941 podStartE2EDuration="2.302631941s" podCreationTimestamp="2025-12-03 20:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:56:53.299484647 +0000 UTC m=+1111.230029798" watchObservedRunningTime="2025-12-03 20:56:53.302631941 +0000 UTC m=+1111.233177092" Dec 03 20:56:54 crc kubenswrapper[4765]: I1203 20:56:54.277666 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" event={"ID":"e5979c42-e1df-4dd8-af4d-7e02e309bcc0","Type":"ContainerStarted","Data":"9954a5b1d8cdf831d69d05815ae849574bdfe30e8da9844a453861af2c2f2eb9"} Dec 03 20:56:54 crc kubenswrapper[4765]: I1203 20:56:54.285597 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" event={"ID":"fbb6de7b-6119-49ae-ad28-7d6a92195ab8","Type":"ContainerStarted","Data":"eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652"} Dec 03 20:56:54 crc kubenswrapper[4765]: I1203 20:56:54.285645 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:56:54 crc kubenswrapper[4765]: I1203 20:56:54.322019 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" podStartSLOduration=2.322000134 podStartE2EDuration="2.322000134s" podCreationTimestamp="2025-12-03 20:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:56:54.298678906 +0000 UTC m=+1112.229224057" watchObservedRunningTime="2025-12-03 20:56:54.322000134 +0000 UTC m=+1112.252545305" Dec 03 20:56:54 crc kubenswrapper[4765]: I1203 20:56:54.323959 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" podStartSLOduration=3.323949737 podStartE2EDuration="3.323949737s" podCreationTimestamp="2025-12-03 20:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:56:54.316145596 +0000 UTC m=+1112.246690767" watchObservedRunningTime="2025-12-03 20:56:54.323949737 +0000 UTC m=+1112.254494898" Dec 03 20:56:55 crc kubenswrapper[4765]: I1203 20:56:55.279242 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:56:55 crc kubenswrapper[4765]: I1203 20:56:55.293694 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.311627 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd779c1b-bf55-4c6a-8274-0934604141a2","Type":"ContainerStarted","Data":"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288"} Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.320451 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0962251-5751-4fc8-a8c1-c7908e4d1fe9","Type":"ContainerStarted","Data":"4a86832328b2d1eb9bda7f8d184218bd45daca5d49c383c3c4d2106c3014b09c"} Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.320619 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e0962251-5751-4fc8-a8c1-c7908e4d1fe9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4a86832328b2d1eb9bda7f8d184218bd45daca5d49c383c3c4d2106c3014b09c" gracePeriod=30 Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.326893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d01ea351-8b90-4aa4-aeed-c914173d5389","Type":"ContainerStarted","Data":"eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52"} Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.331101 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6","Type":"ContainerStarted","Data":"3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1"} Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.341937 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.2226474449999998 
podStartE2EDuration="5.341915801s" podCreationTimestamp="2025-12-03 20:56:51 +0000 UTC" firstStartedPulling="2025-12-03 20:56:52.801975738 +0000 UTC m=+1110.732520889" lastFinishedPulling="2025-12-03 20:56:55.921244094 +0000 UTC m=+1113.851789245" observedRunningTime="2025-12-03 20:56:56.338487669 +0000 UTC m=+1114.269032820" watchObservedRunningTime="2025-12-03 20:56:56.341915801 +0000 UTC m=+1114.272460952" Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.372112 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.781728952 podStartE2EDuration="5.372088964s" podCreationTimestamp="2025-12-03 20:56:51 +0000 UTC" firstStartedPulling="2025-12-03 20:56:52.326929416 +0000 UTC m=+1110.257474567" lastFinishedPulling="2025-12-03 20:56:55.917289428 +0000 UTC m=+1113.847834579" observedRunningTime="2025-12-03 20:56:56.358989511 +0000 UTC m=+1114.289534662" watchObservedRunningTime="2025-12-03 20:56:56.372088964 +0000 UTC m=+1114.302634115" Dec 03 20:56:56 crc kubenswrapper[4765]: I1203 20:56:56.734019 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.147488 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.343450 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6","Type":"ContainerStarted","Data":"5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10"} Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.345401 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd779c1b-bf55-4c6a-8274-0934604141a2","Type":"ContainerStarted","Data":"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d"} Dec 03 20:56:57 crc 
kubenswrapper[4765]: I1203 20:56:57.345596 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-log" containerID="cri-o://605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288" gracePeriod=30 Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.345665 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-metadata" containerID="cri-o://84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d" gracePeriod=30 Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.377732 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.194290732 podStartE2EDuration="6.377705296s" podCreationTimestamp="2025-12-03 20:56:51 +0000 UTC" firstStartedPulling="2025-12-03 20:56:52.740016749 +0000 UTC m=+1110.670561900" lastFinishedPulling="2025-12-03 20:56:55.923431293 +0000 UTC m=+1113.853976464" observedRunningTime="2025-12-03 20:56:57.364027937 +0000 UTC m=+1115.294573108" watchObservedRunningTime="2025-12-03 20:56:57.377705296 +0000 UTC m=+1115.308250487" Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.384773 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.862981152 podStartE2EDuration="6.384752205s" podCreationTimestamp="2025-12-03 20:56:51 +0000 UTC" firstStartedPulling="2025-12-03 20:56:52.397849627 +0000 UTC m=+1110.328394778" lastFinishedPulling="2025-12-03 20:56:55.91962068 +0000 UTC m=+1113.850165831" observedRunningTime="2025-12-03 20:56:57.384279833 +0000 UTC m=+1115.314825024" watchObservedRunningTime="2025-12-03 20:56:57.384752205 +0000 UTC m=+1115.315297356" Dec 03 20:56:57 crc kubenswrapper[4765]: I1203 20:56:57.950023 4765 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.088180 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd779c1b-bf55-4c6a-8274-0934604141a2-logs\") pod \"dd779c1b-bf55-4c6a-8274-0934604141a2\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.088255 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-combined-ca-bundle\") pod \"dd779c1b-bf55-4c6a-8274-0934604141a2\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.088408 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmdj\" (UniqueName: \"kubernetes.io/projected/dd779c1b-bf55-4c6a-8274-0934604141a2-kube-api-access-vkmdj\") pod \"dd779c1b-bf55-4c6a-8274-0934604141a2\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.088449 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-config-data\") pod \"dd779c1b-bf55-4c6a-8274-0934604141a2\" (UID: \"dd779c1b-bf55-4c6a-8274-0934604141a2\") " Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.088799 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd779c1b-bf55-4c6a-8274-0934604141a2-logs" (OuterVolumeSpecName: "logs") pod "dd779c1b-bf55-4c6a-8274-0934604141a2" (UID: "dd779c1b-bf55-4c6a-8274-0934604141a2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.089000 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd779c1b-bf55-4c6a-8274-0934604141a2-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.103542 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd779c1b-bf55-4c6a-8274-0934604141a2-kube-api-access-vkmdj" (OuterVolumeSpecName: "kube-api-access-vkmdj") pod "dd779c1b-bf55-4c6a-8274-0934604141a2" (UID: "dd779c1b-bf55-4c6a-8274-0934604141a2"). InnerVolumeSpecName "kube-api-access-vkmdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.115771 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-config-data" (OuterVolumeSpecName: "config-data") pod "dd779c1b-bf55-4c6a-8274-0934604141a2" (UID: "dd779c1b-bf55-4c6a-8274-0934604141a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.128038 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd779c1b-bf55-4c6a-8274-0934604141a2" (UID: "dd779c1b-bf55-4c6a-8274-0934604141a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.190252 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmdj\" (UniqueName: \"kubernetes.io/projected/dd779c1b-bf55-4c6a-8274-0934604141a2-kube-api-access-vkmdj\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.190283 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.190309 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd779c1b-bf55-4c6a-8274-0934604141a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.365873 4765 generic.go:334] "Generic (PLEG): container finished" podID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerID="84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d" exitCode=0 Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.365909 4765 generic.go:334] "Generic (PLEG): container finished" podID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerID="605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288" exitCode=143 Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.366220 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.395430 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd779c1b-bf55-4c6a-8274-0934604141a2","Type":"ContainerDied","Data":"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d"} Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.395484 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd779c1b-bf55-4c6a-8274-0934604141a2","Type":"ContainerDied","Data":"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288"} Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.395497 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd779c1b-bf55-4c6a-8274-0934604141a2","Type":"ContainerDied","Data":"9dd0af3ca14948e8c6cd424fc35cbbadb4ff2833d6892ae5654f5124070a21be"} Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.395520 4765 scope.go:117] "RemoveContainer" containerID="84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.432408 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.441469 4765 scope.go:117] "RemoveContainer" containerID="605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.447140 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.458614 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:58 crc kubenswrapper[4765]: E1203 20:56:58.459260 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-log" Dec 03 20:56:58 crc 
kubenswrapper[4765]: I1203 20:56:58.459289 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-log" Dec 03 20:56:58 crc kubenswrapper[4765]: E1203 20:56:58.459343 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-metadata" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.459359 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-metadata" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.459642 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-log" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.459673 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" containerName="nova-metadata-metadata" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.460915 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.465095 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.465847 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.467334 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.482609 4765 scope.go:117] "RemoveContainer" containerID="84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d" Dec 03 20:56:58 crc kubenswrapper[4765]: E1203 20:56:58.483223 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d\": container with ID starting with 84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d not found: ID does not exist" containerID="84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.483274 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d"} err="failed to get container status \"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d\": rpc error: code = NotFound desc = could not find container \"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d\": container with ID starting with 84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d not found: ID does not exist" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.483318 4765 scope.go:117] "RemoveContainer" containerID="605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288" Dec 03 20:56:58 crc 
kubenswrapper[4765]: E1203 20:56:58.487066 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288\": container with ID starting with 605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288 not found: ID does not exist" containerID="605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.487121 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288"} err="failed to get container status \"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288\": rpc error: code = NotFound desc = could not find container \"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288\": container with ID starting with 605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288 not found: ID does not exist" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.487156 4765 scope.go:117] "RemoveContainer" containerID="84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.487741 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d"} err="failed to get container status \"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d\": rpc error: code = NotFound desc = could not find container \"84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d\": container with ID starting with 84bad4f2ba7897efc531079b2ae3f97977bd41fa3adb0873f45d9f83519a8d4d not found: ID does not exist" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.487801 4765 scope.go:117] "RemoveContainer" containerID="605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288" Dec 03 
20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.488211 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288"} err="failed to get container status \"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288\": rpc error: code = NotFound desc = could not find container \"605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288\": container with ID starting with 605698c60f74f4f9d4f26484469c0c9f9407294f00944d6d9053cf0da29fd288 not found: ID does not exist" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.599056 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8daea-38ca-4c20-8d31-c587de6a85e3-logs\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.599330 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.599516 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-config-data\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.599641 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-nova-metadata-tls-certs\") 
pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.599801 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkcns\" (UniqueName: \"kubernetes.io/projected/e0e8daea-38ca-4c20-8d31-c587de6a85e3-kube-api-access-jkcns\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.701373 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.701466 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-config-data\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.701493 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.701521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkcns\" (UniqueName: \"kubernetes.io/projected/e0e8daea-38ca-4c20-8d31-c587de6a85e3-kube-api-access-jkcns\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc 
kubenswrapper[4765]: I1203 20:56:58.701551 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8daea-38ca-4c20-8d31-c587de6a85e3-logs\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.701915 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8daea-38ca-4c20-8d31-c587de6a85e3-logs\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.706569 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.707946 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-config-data\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.708180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.736441 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkcns\" (UniqueName: \"kubernetes.io/projected/e0e8daea-38ca-4c20-8d31-c587de6a85e3-kube-api-access-jkcns\") 
pod \"nova-metadata-0\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " pod="openstack/nova-metadata-0" Dec 03 20:56:58 crc kubenswrapper[4765]: I1203 20:56:58.794635 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:56:59 crc kubenswrapper[4765]: I1203 20:56:59.272280 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:56:59 crc kubenswrapper[4765]: I1203 20:56:59.376107 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0e8daea-38ca-4c20-8d31-c587de6a85e3","Type":"ContainerStarted","Data":"d34bacb524a42dbf527fd5d0dde47e83a7022212f6847ae09300dd81147b640d"} Dec 03 20:57:00 crc kubenswrapper[4765]: I1203 20:57:00.384136 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd779c1b-bf55-4c6a-8274-0934604141a2" path="/var/lib/kubelet/pods/dd779c1b-bf55-4c6a-8274-0934604141a2/volumes" Dec 03 20:57:00 crc kubenswrapper[4765]: I1203 20:57:00.389015 4765 generic.go:334] "Generic (PLEG): container finished" podID="1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" containerID="0775648bb788239c52f7c759bd63ac9471f09976758422b8390bfb0ef802681c" exitCode=0 Dec 03 20:57:00 crc kubenswrapper[4765]: I1203 20:57:00.389122 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rtsrx" event={"ID":"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0","Type":"ContainerDied","Data":"0775648bb788239c52f7c759bd63ac9471f09976758422b8390bfb0ef802681c"} Dec 03 20:57:00 crc kubenswrapper[4765]: I1203 20:57:00.397439 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0e8daea-38ca-4c20-8d31-c587de6a85e3","Type":"ContainerStarted","Data":"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8"} Dec 03 20:57:00 crc kubenswrapper[4765]: I1203 20:57:00.397504 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"e0e8daea-38ca-4c20-8d31-c587de6a85e3","Type":"ContainerStarted","Data":"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70"} Dec 03 20:57:00 crc kubenswrapper[4765]: I1203 20:57:00.462618 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.462589244 podStartE2EDuration="2.462589244s" podCreationTimestamp="2025-12-03 20:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:00.447514418 +0000 UTC m=+1118.378059569" watchObservedRunningTime="2025-12-03 20:57:00.462589244 +0000 UTC m=+1118.393134425" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.411485 4765 generic.go:334] "Generic (PLEG): container finished" podID="e5979c42-e1df-4dd8-af4d-7e02e309bcc0" containerID="9954a5b1d8cdf831d69d05815ae849574bdfe30e8da9844a453861af2c2f2eb9" exitCode=0 Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.411574 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" event={"ID":"e5979c42-e1df-4dd8-af4d-7e02e309bcc0","Type":"ContainerDied","Data":"9954a5b1d8cdf831d69d05815ae849574bdfe30e8da9844a453861af2c2f2eb9"} Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.734811 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.780666 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.859622 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.962061 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-config-data\") pod \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.962397 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-kube-api-access-jltl6\") pod \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.962504 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-combined-ca-bundle\") pod \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.962591 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-scripts\") pod \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\" (UID: \"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0\") " Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.968505 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-kube-api-access-jltl6" (OuterVolumeSpecName: "kube-api-access-jltl6") pod "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" (UID: "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0"). InnerVolumeSpecName "kube-api-access-jltl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.968810 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-scripts" (OuterVolumeSpecName: "scripts") pod "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" (UID: "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.995536 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" (UID: "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:01 crc kubenswrapper[4765]: I1203 20:57:01.995653 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-config-data" (OuterVolumeSpecName: "config-data") pod "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" (UID: "1de4d2e6-ab91-4e44-a83d-ba6f9f384be0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.064979 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-kube-api-access-jltl6\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.065010 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.065020 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.065029 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.112823 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.113434 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.164541 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.247474 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-fnw5n"] Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.247822 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" 
podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerName="dnsmasq-dns" containerID="cri-o://c36cdbbd761fdc829e33b578ada90aad9c24eec2d5e372e6f667abe0d4c8402c" gracePeriod=10 Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.439811 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rtsrx" event={"ID":"1de4d2e6-ab91-4e44-a83d-ba6f9f384be0","Type":"ContainerDied","Data":"a2be9257e83ee9e0af271fa78a94c45073c2ffc1b2cf946a777e46e91d989d83"} Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.440129 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2be9257e83ee9e0af271fa78a94c45073c2ffc1b2cf946a777e46e91d989d83" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.440200 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rtsrx" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.446085 4765 generic.go:334] "Generic (PLEG): container finished" podID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerID="c36cdbbd761fdc829e33b578ada90aad9c24eec2d5e372e6f667abe0d4c8402c" exitCode=0 Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.448134 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" event={"ID":"42c8be96-b365-46a2-8069-f4ccb5c9fa77","Type":"ContainerDied","Data":"c36cdbbd761fdc829e33b578ada90aad9c24eec2d5e372e6f667abe0d4c8402c"} Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.484120 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.655009 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.695104 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.695433 
4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-log" containerID="cri-o://ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70" gracePeriod=30 Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.695509 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-metadata" containerID="cri-o://38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8" gracePeriod=30 Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.696038 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.776900 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-sb\") pod \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.777405 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-dns-svc\") pod \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.777606 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxlgv\" (UniqueName: \"kubernetes.io/projected/42c8be96-b365-46a2-8069-f4ccb5c9fa77-kube-api-access-hxlgv\") pod \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.777705 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-nb\") pod \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.778442 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-config\") pod \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\" (UID: \"42c8be96-b365-46a2-8069-f4ccb5c9fa77\") " Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.782008 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c8be96-b365-46a2-8069-f4ccb5c9fa77-kube-api-access-hxlgv" (OuterVolumeSpecName: "kube-api-access-hxlgv") pod "42c8be96-b365-46a2-8069-f4ccb5c9fa77" (UID: "42c8be96-b365-46a2-8069-f4ccb5c9fa77"). InnerVolumeSpecName "kube-api-access-hxlgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.852231 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42c8be96-b365-46a2-8069-f4ccb5c9fa77" (UID: "42c8be96-b365-46a2-8069-f4ccb5c9fa77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.860982 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42c8be96-b365-46a2-8069-f4ccb5c9fa77" (UID: "42c8be96-b365-46a2-8069-f4ccb5c9fa77"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.864898 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-config" (OuterVolumeSpecName: "config") pod "42c8be96-b365-46a2-8069-f4ccb5c9fa77" (UID: "42c8be96-b365-46a2-8069-f4ccb5c9fa77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.868725 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42c8be96-b365-46a2-8069-f4ccb5c9fa77" (UID: "42c8be96-b365-46a2-8069-f4ccb5c9fa77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.886036 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.886380 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.886395 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxlgv\" (UniqueName: \"kubernetes.io/projected/42c8be96-b365-46a2-8069-f4ccb5c9fa77-kube-api-access-hxlgv\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.886410 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc 
kubenswrapper[4765]: I1203 20:57:02.886425 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c8be96-b365-46a2-8069-f4ccb5c9fa77-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:02 crc kubenswrapper[4765]: I1203 20:57:02.994677 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.021425 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.192761 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-scripts\") pod \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.193070 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-combined-ca-bundle\") pod \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.193136 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-config-data\") pod \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.193207 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdjnf\" (UniqueName: \"kubernetes.io/projected/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-kube-api-access-wdjnf\") pod \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\" (UID: \"e5979c42-e1df-4dd8-af4d-7e02e309bcc0\") " Dec 03 
20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.199765 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.200036 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.208517 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-kube-api-access-wdjnf" (OuterVolumeSpecName: "kube-api-access-wdjnf") pod "e5979c42-e1df-4dd8-af4d-7e02e309bcc0" (UID: "e5979c42-e1df-4dd8-af4d-7e02e309bcc0"). InnerVolumeSpecName "kube-api-access-wdjnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.212915 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-scripts" (OuterVolumeSpecName: "scripts") pod "e5979c42-e1df-4dd8-af4d-7e02e309bcc0" (UID: "e5979c42-e1df-4dd8-af4d-7e02e309bcc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.241786 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5979c42-e1df-4dd8-af4d-7e02e309bcc0" (UID: "e5979c42-e1df-4dd8-af4d-7e02e309bcc0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.253882 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-config-data" (OuterVolumeSpecName: "config-data") pod "e5979c42-e1df-4dd8-af4d-7e02e309bcc0" (UID: "e5979c42-e1df-4dd8-af4d-7e02e309bcc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.260262 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.296095 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.296312 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.296401 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdjnf\" (UniqueName: \"kubernetes.io/projected/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-kube-api-access-wdjnf\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.296478 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5979c42-e1df-4dd8-af4d-7e02e309bcc0-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.397291 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-nova-metadata-tls-certs\") pod 
\"e0e8daea-38ca-4c20-8d31-c587de6a85e3\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.397482 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-combined-ca-bundle\") pod \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.397549 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8daea-38ca-4c20-8d31-c587de6a85e3-logs\") pod \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.397580 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkcns\" (UniqueName: \"kubernetes.io/projected/e0e8daea-38ca-4c20-8d31-c587de6a85e3-kube-api-access-jkcns\") pod \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.397620 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-config-data\") pod \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\" (UID: \"e0e8daea-38ca-4c20-8d31-c587de6a85e3\") " Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.397889 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e8daea-38ca-4c20-8d31-c587de6a85e3-logs" (OuterVolumeSpecName: "logs") pod "e0e8daea-38ca-4c20-8d31-c587de6a85e3" (UID: "e0e8daea-38ca-4c20-8d31-c587de6a85e3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.400706 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e8daea-38ca-4c20-8d31-c587de6a85e3-kube-api-access-jkcns" (OuterVolumeSpecName: "kube-api-access-jkcns") pod "e0e8daea-38ca-4c20-8d31-c587de6a85e3" (UID: "e0e8daea-38ca-4c20-8d31-c587de6a85e3"). InnerVolumeSpecName "kube-api-access-jkcns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.419417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-config-data" (OuterVolumeSpecName: "config-data") pod "e0e8daea-38ca-4c20-8d31-c587de6a85e3" (UID: "e0e8daea-38ca-4c20-8d31-c587de6a85e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.436581 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0e8daea-38ca-4c20-8d31-c587de6a85e3" (UID: "e0e8daea-38ca-4c20-8d31-c587de6a85e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.458207 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e0e8daea-38ca-4c20-8d31-c587de6a85e3" (UID: "e0e8daea-38ca-4c20-8d31-c587de6a85e3"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477599 4765 generic.go:334] "Generic (PLEG): container finished" podID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerID="38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8" exitCode=0 Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477633 4765 generic.go:334] "Generic (PLEG): container finished" podID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerID="ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70" exitCode=143 Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477720 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477816 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0e8daea-38ca-4c20-8d31-c587de6a85e3","Type":"ContainerDied","Data":"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8"} Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477870 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0e8daea-38ca-4c20-8d31-c587de6a85e3","Type":"ContainerDied","Data":"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70"} Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477881 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0e8daea-38ca-4c20-8d31-c587de6a85e3","Type":"ContainerDied","Data":"d34bacb524a42dbf527fd5d0dde47e83a7022212f6847ae09300dd81147b640d"} Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.477898 4765 scope.go:117] "RemoveContainer" containerID="38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.482939 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.482929 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-fnw5n" event={"ID":"42c8be96-b365-46a2-8069-f4ccb5c9fa77","Type":"ContainerDied","Data":"2250ef580fc6f98cabbe44a4c3af6a66237fec1f887de9da433363708fd0b0ec"} Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.484806 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-log" containerID="cri-o://3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1" gracePeriod=30 Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.485114 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.491911 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg8mp" event={"ID":"e5979c42-e1df-4dd8-af4d-7e02e309bcc0","Type":"ContainerDied","Data":"48c45e8e540cb398b3b6970201ff076d8cedb3d4382e97c0c8a65be4e2e87c1c"} Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.492001 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c45e8e540cb398b3b6970201ff076d8cedb3d4382e97c0c8a65be4e2e87c1c" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.492167 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-api" containerID="cri-o://5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10" gracePeriod=30 Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.500382 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.500507 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8daea-38ca-4c20-8d31-c587de6a85e3-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.500584 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkcns\" (UniqueName: \"kubernetes.io/projected/e0e8daea-38ca-4c20-8d31-c587de6a85e3-kube-api-access-jkcns\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.500648 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.500742 4765 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e8daea-38ca-4c20-8d31-c587de6a85e3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.511448 4765 scope.go:117] "RemoveContainer" containerID="ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.527945 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.528470 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-metadata" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528498 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-metadata" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.528518 4765 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerName="dnsmasq-dns" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528525 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerName="dnsmasq-dns" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.528535 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" containerName="nova-manage" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528541 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" containerName="nova-manage" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.528557 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerName="init" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528563 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerName="init" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.528584 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-log" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528590 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-log" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.528602 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5979c42-e1df-4dd8-af4d-7e02e309bcc0" containerName="nova-cell1-conductor-db-sync" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528609 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5979c42-e1df-4dd8-af4d-7e02e309bcc0" containerName="nova-cell1-conductor-db-sync" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528764 4765 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e5979c42-e1df-4dd8-af4d-7e02e309bcc0" containerName="nova-cell1-conductor-db-sync" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528777 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" containerName="dnsmasq-dns" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528789 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" containerName="nova-manage" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528799 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-log" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.528805 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" containerName="nova-metadata-metadata" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.529756 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.534936 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.540351 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.548787 4765 scope.go:117] "RemoveContainer" containerID="38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.549096 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8\": container with ID starting with 38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8 not found: ID does not exist" containerID="38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.549122 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8"} err="failed to get container status \"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8\": rpc error: code = NotFound desc = could not find container \"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8\": container with ID starting with 38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8 not found: ID does not exist" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.549142 4765 scope.go:117] "RemoveContainer" containerID="ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70" Dec 03 20:57:03 crc kubenswrapper[4765]: E1203 20:57:03.549359 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70\": container with ID starting with ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70 not found: ID does not exist" containerID="ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.549377 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70"} err="failed to get container status \"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70\": rpc error: code = NotFound desc = could not find container \"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70\": container with ID starting with ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70 not found: ID does not exist" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.549390 4765 scope.go:117] "RemoveContainer" containerID="38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.549618 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8"} err="failed to get container status \"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8\": rpc error: code = NotFound desc = could not find container \"38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8\": container with ID starting with 38c38b6bfb45afef4113f032d3570eb638889bbac1909f0cf3d07f265ed50fd8 not found: ID does not exist" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.549637 4765 scope.go:117] "RemoveContainer" containerID="ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.550043 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70"} err="failed to get container status \"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70\": rpc error: code = NotFound desc = could not find container \"ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70\": container with ID starting with ce7edad9943eec1669a02ebcdd4e558059951a519deb9bc825f8371e7b0eee70 not found: ID does not exist" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.550093 4765 scope.go:117] "RemoveContainer" containerID="c36cdbbd761fdc829e33b578ada90aad9c24eec2d5e372e6f667abe0d4c8402c" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.559399 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-fnw5n"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.571200 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-fnw5n"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.605359 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.605817 4765 scope.go:117] "RemoveContainer" containerID="228f2c3a0cf75f72e419fbb0e3f09f66f277cba59a9a2f77b7c8e1a7dc4a4890" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.606326 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn84h\" (UniqueName: \"kubernetes.io/projected/7c9a4172-479b-4188-9264-208492b2be91-kube-api-access-xn84h\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.606374 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9a4172-479b-4188-9264-208492b2be91-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.606429 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9a4172-479b-4188-9264-208492b2be91-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.619079 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.631372 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.633320 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.638612 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.639815 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.640137 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.708240 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-config-data\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.708772 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.708896 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn84h\" (UniqueName: \"kubernetes.io/projected/7c9a4172-479b-4188-9264-208492b2be91-kube-api-access-xn84h\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.709015 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9a4172-479b-4188-9264-208492b2be91-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.709112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.709218 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9a4172-479b-4188-9264-208492b2be91-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.709354 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc34be1f-8c78-47ab-9adf-019219027643-logs\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.709466 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28kpv\" (UniqueName: \"kubernetes.io/projected/dc34be1f-8c78-47ab-9adf-019219027643-kube-api-access-28kpv\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.714102 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9a4172-479b-4188-9264-208492b2be91-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.715151 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9a4172-479b-4188-9264-208492b2be91-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.728172 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn84h\" (UniqueName: \"kubernetes.io/projected/7c9a4172-479b-4188-9264-208492b2be91-kube-api-access-xn84h\") pod \"nova-cell1-conductor-0\" (UID: \"7c9a4172-479b-4188-9264-208492b2be91\") " pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.810733 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.810824 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc34be1f-8c78-47ab-9adf-019219027643-logs\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.810867 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28kpv\" (UniqueName: \"kubernetes.io/projected/dc34be1f-8c78-47ab-9adf-019219027643-kube-api-access-28kpv\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.810890 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-config-data\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.810924 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.811402 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc34be1f-8c78-47ab-9adf-019219027643-logs\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.814654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-config-data\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.814840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.816076 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.829077 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28kpv\" (UniqueName: \"kubernetes.io/projected/dc34be1f-8c78-47ab-9adf-019219027643-kube-api-access-28kpv\") pod \"nova-metadata-0\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " pod="openstack/nova-metadata-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.852141 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:03 crc kubenswrapper[4765]: I1203 20:57:03.980271 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.323268 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 20:57:04 crc kubenswrapper[4765]: W1203 20:57:04.333857 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c9a4172_479b_4188_9264_208492b2be91.slice/crio-5d34697718653f312528b57453cf7df23c11798744cbd21b18def76f1d87faa8 WatchSource:0}: Error finding container 5d34697718653f312528b57453cf7df23c11798744cbd21b18def76f1d87faa8: Status 404 returned error can't find the container with id 5d34697718653f312528b57453cf7df23c11798744cbd21b18def76f1d87faa8 Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.370974 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c8be96-b365-46a2-8069-f4ccb5c9fa77" path="/var/lib/kubelet/pods/42c8be96-b365-46a2-8069-f4ccb5c9fa77/volumes" Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.372036 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e8daea-38ca-4c20-8d31-c587de6a85e3" path="/var/lib/kubelet/pods/e0e8daea-38ca-4c20-8d31-c587de6a85e3/volumes" Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.441375 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:04 crc kubenswrapper[4765]: W1203 20:57:04.447259 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc34be1f_8c78_47ab_9adf_019219027643.slice/crio-a30328bc9733ce8826400c387bbe465976bd41009aeb04c79e760e2b85edb6aa WatchSource:0}: Error finding container a30328bc9733ce8826400c387bbe465976bd41009aeb04c79e760e2b85edb6aa: Status 404 returned error can't find the container with id a30328bc9733ce8826400c387bbe465976bd41009aeb04c79e760e2b85edb6aa Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 
20:57:04.497478 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc34be1f-8c78-47ab-9adf-019219027643","Type":"ContainerStarted","Data":"a30328bc9733ce8826400c387bbe465976bd41009aeb04c79e760e2b85edb6aa"} Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.498794 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7c9a4172-479b-4188-9264-208492b2be91","Type":"ContainerStarted","Data":"5d34697718653f312528b57453cf7df23c11798744cbd21b18def76f1d87faa8"} Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.505484 4765 generic.go:334] "Generic (PLEG): container finished" podID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerID="3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1" exitCode=143 Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.505569 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6","Type":"ContainerDied","Data":"3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1"} Dec 03 20:57:04 crc kubenswrapper[4765]: I1203 20:57:04.507464 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d01ea351-8b90-4aa4-aeed-c914173d5389" containerName="nova-scheduler-scheduler" containerID="cri-o://eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" gracePeriod=30 Dec 03 20:57:05 crc kubenswrapper[4765]: I1203 20:57:05.521678 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7c9a4172-479b-4188-9264-208492b2be91","Type":"ContainerStarted","Data":"2654e20928340e8dd4992da58d459e81fbb65cb744525b0ddc3cb3135ced8259"} Dec 03 20:57:05 crc kubenswrapper[4765]: I1203 20:57:05.522362 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:05 crc kubenswrapper[4765]: 
I1203 20:57:05.535189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc34be1f-8c78-47ab-9adf-019219027643","Type":"ContainerStarted","Data":"4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6"} Dec 03 20:57:05 crc kubenswrapper[4765]: I1203 20:57:05.545824 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.545802838 podStartE2EDuration="2.545802838s" podCreationTimestamp="2025-12-03 20:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:05.536378603 +0000 UTC m=+1123.466923774" watchObservedRunningTime="2025-12-03 20:57:05.545802838 +0000 UTC m=+1123.476348009" Dec 03 20:57:06 crc kubenswrapper[4765]: I1203 20:57:06.548747 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc34be1f-8c78-47ab-9adf-019219027643","Type":"ContainerStarted","Data":"037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99"} Dec 03 20:57:06 crc kubenswrapper[4765]: I1203 20:57:06.569332 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.569312682 podStartE2EDuration="3.569312682s" podCreationTimestamp="2025-12-03 20:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:06.566379263 +0000 UTC m=+1124.496924434" watchObservedRunningTime="2025-12-03 20:57:06.569312682 +0000 UTC m=+1124.499857833" Dec 03 20:57:06 crc kubenswrapper[4765]: E1203 20:57:06.737583 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 20:57:06 crc kubenswrapper[4765]: E1203 20:57:06.739370 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 20:57:06 crc kubenswrapper[4765]: E1203 20:57:06.740591 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 20:57:06 crc kubenswrapper[4765]: E1203 20:57:06.740625 4765 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d01ea351-8b90-4aa4-aeed-c914173d5389" containerName="nova-scheduler-scheduler" Dec 03 20:57:07 crc kubenswrapper[4765]: I1203 20:57:07.485747 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 20:57:07 crc kubenswrapper[4765]: I1203 20:57:07.957397 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.106546 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clbkh\" (UniqueName: \"kubernetes.io/projected/d01ea351-8b90-4aa4-aeed-c914173d5389-kube-api-access-clbkh\") pod \"d01ea351-8b90-4aa4-aeed-c914173d5389\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.106976 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-config-data\") pod \"d01ea351-8b90-4aa4-aeed-c914173d5389\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.107085 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-combined-ca-bundle\") pod \"d01ea351-8b90-4aa4-aeed-c914173d5389\" (UID: \"d01ea351-8b90-4aa4-aeed-c914173d5389\") " Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.118627 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01ea351-8b90-4aa4-aeed-c914173d5389-kube-api-access-clbkh" (OuterVolumeSpecName: "kube-api-access-clbkh") pod "d01ea351-8b90-4aa4-aeed-c914173d5389" (UID: "d01ea351-8b90-4aa4-aeed-c914173d5389"). InnerVolumeSpecName "kube-api-access-clbkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.142522 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-config-data" (OuterVolumeSpecName: "config-data") pod "d01ea351-8b90-4aa4-aeed-c914173d5389" (UID: "d01ea351-8b90-4aa4-aeed-c914173d5389"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.144894 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d01ea351-8b90-4aa4-aeed-c914173d5389" (UID: "d01ea351-8b90-4aa4-aeed-c914173d5389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.209182 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.209228 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01ea351-8b90-4aa4-aeed-c914173d5389-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.209243 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clbkh\" (UniqueName: \"kubernetes.io/projected/d01ea351-8b90-4aa4-aeed-c914173d5389-kube-api-access-clbkh\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.571255 4765 generic.go:334] "Generic (PLEG): container finished" podID="d01ea351-8b90-4aa4-aeed-c914173d5389" containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" exitCode=0 Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.571314 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d01ea351-8b90-4aa4-aeed-c914173d5389","Type":"ContainerDied","Data":"eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52"} Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.571339 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.571599 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d01ea351-8b90-4aa4-aeed-c914173d5389","Type":"ContainerDied","Data":"c7f0e7627be50c4f19a07cb8f7362a91cea628224e09384d4f3e67938ee61a8f"} Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.571694 4765 scope.go:117] "RemoveContainer" containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.598895 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.621083 4765 scope.go:117] "RemoveContainer" containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" Dec 03 20:57:08 crc kubenswrapper[4765]: E1203 20:57:08.622720 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52\": container with ID starting with eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52 not found: ID does not exist" containerID="eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.622767 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52"} err="failed to get container status \"eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52\": rpc error: code = NotFound desc = could not find container \"eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52\": container with ID starting with eb3a1f5ba9b59db10c26a26cad46d37bafe9838196ffc057b868420aacca3c52 not found: ID does not exist" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.623504 4765 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.637545 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:08 crc kubenswrapper[4765]: E1203 20:57:08.638025 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01ea351-8b90-4aa4-aeed-c914173d5389" containerName="nova-scheduler-scheduler" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.638049 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01ea351-8b90-4aa4-aeed-c914173d5389" containerName="nova-scheduler-scheduler" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.638254 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01ea351-8b90-4aa4-aeed-c914173d5389" containerName="nova-scheduler-scheduler" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.639138 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.650841 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.671623 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.820160 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzwvr\" (UniqueName: \"kubernetes.io/projected/59204443-4c2b-45aa-97b4-75d33207cc52-kube-api-access-jzwvr\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.820233 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.820260 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-config-data\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.921836 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-config-data\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.922027 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzwvr\" (UniqueName: \"kubernetes.io/projected/59204443-4c2b-45aa-97b4-75d33207cc52-kube-api-access-jzwvr\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.922093 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.929253 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " 
pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.929456 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-config-data\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.945711 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzwvr\" (UniqueName: \"kubernetes.io/projected/59204443-4c2b-45aa-97b4-75d33207cc52-kube-api-access-jzwvr\") pod \"nova-scheduler-0\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.980713 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.980761 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 20:57:08 crc kubenswrapper[4765]: I1203 20:57:08.987101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.434486 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.498765 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.582921 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59204443-4c2b-45aa-97b4-75d33207cc52","Type":"ContainerStarted","Data":"a901ff1e57fc0e54a7de27363e76037c9eb3495e2dd86d3c97cb42ab5d43aed0"} Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.585332 4765 generic.go:334] "Generic (PLEG): container finished" podID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerID="5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10" exitCode=0 Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.585374 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.585384 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6","Type":"ContainerDied","Data":"5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10"} Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.585422 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6","Type":"ContainerDied","Data":"2c850adcac269beb1e518588e44a21b9602bfbfab6b6a92bd6b5bb96473cfed8"} Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.585442 4765 scope.go:117] "RemoveContainer" containerID="5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.611648 4765 scope.go:117] "RemoveContainer" containerID="3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.636463 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-logs\") pod 
\"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.636638 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-config-data\") pod \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.636669 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-combined-ca-bundle\") pod \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.636714 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hrb6\" (UniqueName: \"kubernetes.io/projected/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-kube-api-access-9hrb6\") pod \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\" (UID: \"9fd84a16-243b-4b3c-a7c8-e52ab58c58a6\") " Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.637268 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-logs" (OuterVolumeSpecName: "logs") pod "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" (UID: "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.637940 4765 scope.go:117] "RemoveContainer" containerID="5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10" Dec 03 20:57:09 crc kubenswrapper[4765]: E1203 20:57:09.638394 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10\": container with ID starting with 5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10 not found: ID does not exist" containerID="5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.638434 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10"} err="failed to get container status \"5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10\": rpc error: code = NotFound desc = could not find container \"5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10\": container with ID starting with 5a7cc605aaca27623fb19d66323f58661f64d5d7039488370771ebe38a77cc10 not found: ID does not exist" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.638455 4765 scope.go:117] "RemoveContainer" containerID="3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1" Dec 03 20:57:09 crc kubenswrapper[4765]: E1203 20:57:09.638977 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1\": container with ID starting with 3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1 not found: ID does not exist" containerID="3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.639028 
4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1"} err="failed to get container status \"3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1\": rpc error: code = NotFound desc = could not find container \"3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1\": container with ID starting with 3132543e3860ef69f061bff50b101d05dccaace299cad0d4f1b7337b54642bc1 not found: ID does not exist" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.645399 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-kube-api-access-9hrb6" (OuterVolumeSpecName: "kube-api-access-9hrb6") pod "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" (UID: "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6"). InnerVolumeSpecName "kube-api-access-9hrb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.661803 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-config-data" (OuterVolumeSpecName: "config-data") pod "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" (UID: "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.663723 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" (UID: "9fd84a16-243b-4b3c-a7c8-e52ab58c58a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.739569 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.739625 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.739642 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.739652 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hrb6\" (UniqueName: \"kubernetes.io/projected/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6-kube-api-access-9hrb6\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.946157 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:09 crc kubenswrapper[4765]: I1203 20:57:09.953778 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.034834 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:10 crc kubenswrapper[4765]: E1203 20:57:10.035220 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-api" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.035233 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-api" Dec 03 20:57:10 crc kubenswrapper[4765]: E1203 
20:57:10.035248 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-log" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.035254 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-log" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.035565 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-log" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.035577 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" containerName="nova-api-api" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.036876 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.042622 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.053808 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.145420 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-config-data\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.145484 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqjk\" (UniqueName: \"kubernetes.io/projected/3be613e1-33fd-4e42-9834-d35e9cba181e-kube-api-access-6jqjk\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc 
kubenswrapper[4765]: I1203 20:57:10.145540 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.145627 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be613e1-33fd-4e42-9834-d35e9cba181e-logs\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.246698 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be613e1-33fd-4e42-9834-d35e9cba181e-logs\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.247066 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-config-data\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.247127 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqjk\" (UniqueName: \"kubernetes.io/projected/3be613e1-33fd-4e42-9834-d35e9cba181e-kube-api-access-6jqjk\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.247194 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.247252 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be613e1-33fd-4e42-9834-d35e9cba181e-logs\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.251154 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-config-data\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.252576 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.265105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqjk\" (UniqueName: \"kubernetes.io/projected/3be613e1-33fd-4e42-9834-d35e9cba181e-kube-api-access-6jqjk\") pod \"nova-api-0\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.361336 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.372279 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd84a16-243b-4b3c-a7c8-e52ab58c58a6" path="/var/lib/kubelet/pods/9fd84a16-243b-4b3c-a7c8-e52ab58c58a6/volumes" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.373128 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01ea351-8b90-4aa4-aeed-c914173d5389" path="/var/lib/kubelet/pods/d01ea351-8b90-4aa4-aeed-c914173d5389/volumes" Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.597086 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59204443-4c2b-45aa-97b4-75d33207cc52","Type":"ContainerStarted","Data":"6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4"} Dec 03 20:57:10 crc kubenswrapper[4765]: I1203 20:57:10.813754 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.813731949 podStartE2EDuration="2.813731949s" podCreationTimestamp="2025-12-03 20:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:10.621760816 +0000 UTC m=+1128.552306007" watchObservedRunningTime="2025-12-03 20:57:10.813731949 +0000 UTC m=+1128.744277110" Dec 03 20:57:10 crc kubenswrapper[4765]: W1203 20:57:10.826143 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3be613e1_33fd_4e42_9834_d35e9cba181e.slice/crio-1a3840d08e38368cec85fe464efc93bc36f6e281b7e049193090f3c709f5d363 WatchSource:0}: Error finding container 1a3840d08e38368cec85fe464efc93bc36f6e281b7e049193090f3c709f5d363: Status 404 returned error can't find the container with id 1a3840d08e38368cec85fe464efc93bc36f6e281b7e049193090f3c709f5d363 Dec 03 20:57:10 crc kubenswrapper[4765]: 
I1203 20:57:10.830828 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:11 crc kubenswrapper[4765]: I1203 20:57:11.615700 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3be613e1-33fd-4e42-9834-d35e9cba181e","Type":"ContainerStarted","Data":"67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477"} Dec 03 20:57:11 crc kubenswrapper[4765]: I1203 20:57:11.616055 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3be613e1-33fd-4e42-9834-d35e9cba181e","Type":"ContainerStarted","Data":"e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac"} Dec 03 20:57:11 crc kubenswrapper[4765]: I1203 20:57:11.616069 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3be613e1-33fd-4e42-9834-d35e9cba181e","Type":"ContainerStarted","Data":"1a3840d08e38368cec85fe464efc93bc36f6e281b7e049193090f3c709f5d363"} Dec 03 20:57:11 crc kubenswrapper[4765]: I1203 20:57:11.646255 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.646227715 podStartE2EDuration="2.646227715s" podCreationTimestamp="2025-12-03 20:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:11.637350446 +0000 UTC m=+1129.567895617" watchObservedRunningTime="2025-12-03 20:57:11.646227715 +0000 UTC m=+1129.576772886" Dec 03 20:57:13 crc kubenswrapper[4765]: I1203 20:57:13.898640 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 03 20:57:13 crc kubenswrapper[4765]: I1203 20:57:13.981482 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 20:57:13 crc kubenswrapper[4765]: I1203 20:57:13.981558 4765 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 20:57:13 crc kubenswrapper[4765]: I1203 20:57:13.987879 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 20:57:14 crc kubenswrapper[4765]: I1203 20:57:14.996686 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:14 crc kubenswrapper[4765]: I1203 20:57:14.996750 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:18 crc kubenswrapper[4765]: I1203 20:57:18.987794 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 20:57:19 crc kubenswrapper[4765]: I1203 20:57:19.034211 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 20:57:19 crc kubenswrapper[4765]: I1203 20:57:19.740202 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 20:57:20 crc kubenswrapper[4765]: I1203 20:57:20.372092 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 20:57:20 crc kubenswrapper[4765]: I1203 20:57:20.372130 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 20:57:21 crc kubenswrapper[4765]: I1203 20:57:21.444547 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:21 crc kubenswrapper[4765]: I1203 20:57:21.444547 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:23 crc kubenswrapper[4765]: I1203 20:57:23.989930 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 20:57:23 crc kubenswrapper[4765]: I1203 20:57:23.991359 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 20:57:24 crc kubenswrapper[4765]: I1203 20:57:24.001017 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 20:57:24 crc kubenswrapper[4765]: I1203 20:57:24.001089 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.785212 4765 generic.go:334] "Generic (PLEG): container finished" podID="e0962251-5751-4fc8-a8c1-c7908e4d1fe9" containerID="4a86832328b2d1eb9bda7f8d184218bd45daca5d49c383c3c4d2106c3014b09c" exitCode=137 Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.785247 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0962251-5751-4fc8-a8c1-c7908e4d1fe9","Type":"ContainerDied","Data":"4a86832328b2d1eb9bda7f8d184218bd45daca5d49c383c3c4d2106c3014b09c"} Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.785921 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"e0962251-5751-4fc8-a8c1-c7908e4d1fe9","Type":"ContainerDied","Data":"103da01cbe3ac9df8f68ce2bac6b51c9dd22435dcbfb3b9494bc167b2c1fe8d6"} Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.785937 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103da01cbe3ac9df8f68ce2bac6b51c9dd22435dcbfb3b9494bc167b2c1fe8d6" Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.839225 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.985489 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-config-data\") pod \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.985623 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-kube-api-access-gxdsc\") pod \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.985671 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-combined-ca-bundle\") pod \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\" (UID: \"e0962251-5751-4fc8-a8c1-c7908e4d1fe9\") " Dec 03 20:57:26 crc kubenswrapper[4765]: I1203 20:57:26.991096 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-kube-api-access-gxdsc" (OuterVolumeSpecName: "kube-api-access-gxdsc") pod "e0962251-5751-4fc8-a8c1-c7908e4d1fe9" (UID: "e0962251-5751-4fc8-a8c1-c7908e4d1fe9"). 
InnerVolumeSpecName "kube-api-access-gxdsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.012438 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-config-data" (OuterVolumeSpecName: "config-data") pod "e0962251-5751-4fc8-a8c1-c7908e4d1fe9" (UID: "e0962251-5751-4fc8-a8c1-c7908e4d1fe9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.013386 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0962251-5751-4fc8-a8c1-c7908e4d1fe9" (UID: "e0962251-5751-4fc8-a8c1-c7908e4d1fe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.088180 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.088225 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxdsc\" (UniqueName: \"kubernetes.io/projected/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-kube-api-access-gxdsc\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.088267 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0962251-5751-4fc8-a8c1-c7908e4d1fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.795898 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.849116 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.871532 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.884066 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:57:27 crc kubenswrapper[4765]: E1203 20:57:27.884878 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0962251-5751-4fc8-a8c1-c7908e4d1fe9" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.885040 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0962251-5751-4fc8-a8c1-c7908e4d1fe9" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.885417 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0962251-5751-4fc8-a8c1-c7908e4d1fe9" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.886541 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.890454 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.890720 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.890919 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 03 20:57:27 crc kubenswrapper[4765]: I1203 20:57:27.907453 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.010019 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47q2\" (UniqueName: \"kubernetes.io/projected/c40d979f-5978-45a1-9b88-b4587eb142c2-kube-api-access-f47q2\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.010082 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.010275 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.010437 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.010570 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.112597 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47q2\" (UniqueName: \"kubernetes.io/projected/c40d979f-5978-45a1-9b88-b4587eb142c2-kube-api-access-f47q2\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.112657 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.112722 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc 
kubenswrapper[4765]: I1203 20:57:28.112765 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.112818 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.118348 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.120130 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.120399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.129147 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40d979f-5978-45a1-9b88-b4587eb142c2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.131797 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47q2\" (UniqueName: \"kubernetes.io/projected/c40d979f-5978-45a1-9b88-b4587eb142c2-kube-api-access-f47q2\") pod \"nova-cell1-novncproxy-0\" (UID: \"c40d979f-5978-45a1-9b88-b4587eb142c2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.217137 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.374699 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0962251-5751-4fc8-a8c1-c7908e4d1fe9" path="/var/lib/kubelet/pods/e0962251-5751-4fc8-a8c1-c7908e4d1fe9/volumes" Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.671427 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 20:57:28 crc kubenswrapper[4765]: W1203 20:57:28.676067 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40d979f_5978_45a1_9b88_b4587eb142c2.slice/crio-ac11b8ad4c077709874932a7fc4f0ad19bbf02cfcbd86dff21f4db89bde5d83c WatchSource:0}: Error finding container ac11b8ad4c077709874932a7fc4f0ad19bbf02cfcbd86dff21f4db89bde5d83c: Status 404 returned error can't find the container with id ac11b8ad4c077709874932a7fc4f0ad19bbf02cfcbd86dff21f4db89bde5d83c Dec 03 20:57:28 crc kubenswrapper[4765]: I1203 20:57:28.813238 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"c40d979f-5978-45a1-9b88-b4587eb142c2","Type":"ContainerStarted","Data":"ac11b8ad4c077709874932a7fc4f0ad19bbf02cfcbd86dff21f4db89bde5d83c"} Dec 03 20:57:29 crc kubenswrapper[4765]: I1203 20:57:29.828503 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c40d979f-5978-45a1-9b88-b4587eb142c2","Type":"ContainerStarted","Data":"d78f6319b1824cf00d4f49c9d57a7a343c7c3bb5377bc6dc88f95c350a44df9e"} Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.388641 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.391973 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.392368 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.392515 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.397026 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.398395 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.431335 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.431278167 podStartE2EDuration="3.431278167s" podCreationTimestamp="2025-12-03 20:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:29.858433268 +0000 UTC m=+1147.788978429" watchObservedRunningTime="2025-12-03 20:57:30.431278167 +0000 UTC 
m=+1148.361823318" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.630322 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-928b2"] Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.632485 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.666692 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-928b2"] Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.771056 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-config\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.771119 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.771142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.771235 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9fgq\" (UniqueName: 
\"kubernetes.io/projected/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-kube-api-access-b9fgq\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.771282 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.873337 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-config\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.873688 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.873711 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.873788 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9fgq\" (UniqueName: 
\"kubernetes.io/projected/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-kube-api-access-b9fgq\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.873826 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.874209 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-config\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.874367 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.874703 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.874700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: 
\"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.900040 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9fgq\" (UniqueName: \"kubernetes.io/projected/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-kube-api-access-b9fgq\") pod \"dnsmasq-dns-68d4b6d797-928b2\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:30 crc kubenswrapper[4765]: I1203 20:57:30.974541 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:31 crc kubenswrapper[4765]: I1203 20:57:31.484431 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-928b2"] Dec 03 20:57:31 crc kubenswrapper[4765]: I1203 20:57:31.848604 4765 generic.go:334] "Generic (PLEG): container finished" podID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerID="00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6" exitCode=0 Dec 03 20:57:31 crc kubenswrapper[4765]: I1203 20:57:31.848717 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" event={"ID":"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3","Type":"ContainerDied","Data":"00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6"} Dec 03 20:57:31 crc kubenswrapper[4765]: I1203 20:57:31.848790 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" event={"ID":"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3","Type":"ContainerStarted","Data":"4f2b8d3fca0c15c322776800ee145ae79a86802a886e860392170a32ef2845bd"} Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.773057 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.773623 4765 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-central-agent" containerID="cri-o://b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39" gracePeriod=30 Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.773686 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="sg-core" containerID="cri-o://e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a" gracePeriod=30 Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.773734 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-notification-agent" containerID="cri-o://b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09" gracePeriod=30 Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.773733 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="proxy-httpd" containerID="cri-o://07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718" gracePeriod=30 Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.861350 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" event={"ID":"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3","Type":"ContainerStarted","Data":"64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f"} Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.861843 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:32 crc kubenswrapper[4765]: I1203 20:57:32.883748 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" podStartSLOduration=2.88373054 
podStartE2EDuration="2.88373054s" podCreationTimestamp="2025-12-03 20:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:32.87960777 +0000 UTC m=+1150.810152941" watchObservedRunningTime="2025-12-03 20:57:32.88373054 +0000 UTC m=+1150.814275691" Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.163107 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.163335 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-log" containerID="cri-o://e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac" gracePeriod=30 Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.163452 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-api" containerID="cri-o://67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477" gracePeriod=30 Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.218161 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.888728 4765 generic.go:334] "Generic (PLEG): container finished" podID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerID="e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac" exitCode=143 Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.888791 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3be613e1-33fd-4e42-9834-d35e9cba181e","Type":"ContainerDied","Data":"e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac"} Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.899384 4765 generic.go:334] 
"Generic (PLEG): container finished" podID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerID="07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718" exitCode=0 Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.899426 4765 generic.go:334] "Generic (PLEG): container finished" podID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerID="e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a" exitCode=2 Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.899436 4765 generic.go:334] "Generic (PLEG): container finished" podID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerID="b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39" exitCode=0 Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.899440 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerDied","Data":"07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718"} Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.899492 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerDied","Data":"e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a"} Dec 03 20:57:33 crc kubenswrapper[4765]: I1203 20:57:33.899505 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerDied","Data":"b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39"} Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.560341 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.678224 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-scripts\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679161 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pkhl\" (UniqueName: \"kubernetes.io/projected/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-kube-api-access-4pkhl\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679184 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-combined-ca-bundle\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679336 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-sg-core-conf-yaml\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679391 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-log-httpd\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679424 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-ceilometer-tls-certs\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679507 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-run-httpd\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.679632 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-config-data\") pod \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\" (UID: \"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.682782 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.683148 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.689774 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-kube-api-access-4pkhl" (OuterVolumeSpecName: "kube-api-access-4pkhl") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "kube-api-access-4pkhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.696950 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.697692 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-scripts" (OuterVolumeSpecName: "scripts") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.735562 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.774549 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782232 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jqjk\" (UniqueName: \"kubernetes.io/projected/3be613e1-33fd-4e42-9834-d35e9cba181e-kube-api-access-6jqjk\") pod \"3be613e1-33fd-4e42-9834-d35e9cba181e\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782333 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-combined-ca-bundle\") pod \"3be613e1-33fd-4e42-9834-d35e9cba181e\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782355 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be613e1-33fd-4e42-9834-d35e9cba181e-logs\") pod \"3be613e1-33fd-4e42-9834-d35e9cba181e\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782444 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-config-data\") pod \"3be613e1-33fd-4e42-9834-d35e9cba181e\" (UID: \"3be613e1-33fd-4e42-9834-d35e9cba181e\") " Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782892 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782907 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 
20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782916 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782923 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782932 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.782940 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pkhl\" (UniqueName: \"kubernetes.io/projected/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-kube-api-access-4pkhl\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.783654 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be613e1-33fd-4e42-9834-d35e9cba181e-logs" (OuterVolumeSpecName: "logs") pod "3be613e1-33fd-4e42-9834-d35e9cba181e" (UID: "3be613e1-33fd-4e42-9834-d35e9cba181e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.786462 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.788543 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be613e1-33fd-4e42-9834-d35e9cba181e-kube-api-access-6jqjk" (OuterVolumeSpecName: "kube-api-access-6jqjk") pod "3be613e1-33fd-4e42-9834-d35e9cba181e" (UID: "3be613e1-33fd-4e42-9834-d35e9cba181e"). InnerVolumeSpecName "kube-api-access-6jqjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.798431 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-config-data" (OuterVolumeSpecName: "config-data") pod "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" (UID: "24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.811202 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-config-data" (OuterVolumeSpecName: "config-data") pod "3be613e1-33fd-4e42-9834-d35e9cba181e" (UID: "3be613e1-33fd-4e42-9834-d35e9cba181e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.813810 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3be613e1-33fd-4e42-9834-d35e9cba181e" (UID: "3be613e1-33fd-4e42-9834-d35e9cba181e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.884197 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jqjk\" (UniqueName: \"kubernetes.io/projected/3be613e1-33fd-4e42-9834-d35e9cba181e-kube-api-access-6jqjk\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.884230 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.884240 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3be613e1-33fd-4e42-9834-d35e9cba181e-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.884250 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.884259 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be613e1-33fd-4e42-9834-d35e9cba181e-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.884267 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.928391 4765 generic.go:334] "Generic (PLEG): container finished" podID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerID="b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09" exitCode=0 Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.928490 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerDied","Data":"b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09"} Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.928517 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.928545 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8","Type":"ContainerDied","Data":"8fe6b8f641ee11c4c5e07866b8dd3840b8064351a2ebd1554af7baf157374752"} Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.928564 4765 scope.go:117] "RemoveContainer" containerID="07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.931539 4765 generic.go:334] "Generic (PLEG): container finished" podID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerID="67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477" exitCode=0 Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.931576 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3be613e1-33fd-4e42-9834-d35e9cba181e","Type":"ContainerDied","Data":"67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477"} Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.931602 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3be613e1-33fd-4e42-9834-d35e9cba181e","Type":"ContainerDied","Data":"1a3840d08e38368cec85fe464efc93bc36f6e281b7e049193090f3c709f5d363"} Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.931618 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.958745 4765 scope.go:117] "RemoveContainer" containerID="e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a" Dec 03 20:57:36 crc kubenswrapper[4765]: I1203 20:57:36.983672 4765 scope.go:117] "RemoveContainer" containerID="b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.006011 4765 scope.go:117] "RemoveContainer" containerID="b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.014456 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.028364 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.028461 4765 scope.go:117] "RemoveContainer" containerID="07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.028829 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718\": container with ID starting with 07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718 not found: ID does not exist" containerID="07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.028872 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718"} err="failed to get container status \"07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718\": rpc error: code = NotFound desc = could not find container \"07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718\": 
container with ID starting with 07b268d6a875bd18d1bc5df562bf4c41b7fdefca2cb6066d000b9cef2a34c718 not found: ID does not exist" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.028899 4765 scope.go:117] "RemoveContainer" containerID="e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.029233 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a\": container with ID starting with e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a not found: ID does not exist" containerID="e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.029286 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a"} err="failed to get container status \"e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a\": rpc error: code = NotFound desc = could not find container \"e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a\": container with ID starting with e04aff1f79be76c9f3d3bb0a11af5ee283c5537734eef73bb80e2a2b6c36d21a not found: ID does not exist" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.029403 4765 scope.go:117] "RemoveContainer" containerID="b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.029737 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09\": container with ID starting with b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09 not found: ID does not exist" 
containerID="b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.029776 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09"} err="failed to get container status \"b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09\": rpc error: code = NotFound desc = could not find container \"b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09\": container with ID starting with b176a9861bf9ffa3809b2525feb874affe2a4f19c994160beb34042276bf8a09 not found: ID does not exist" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.029804 4765 scope.go:117] "RemoveContainer" containerID="b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.030140 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39\": container with ID starting with b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39 not found: ID does not exist" containerID="b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.030174 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39"} err="failed to get container status \"b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39\": rpc error: code = NotFound desc = could not find container \"b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39\": container with ID starting with b70e207558030b0f0b4fcddb5fa2bf370bd48a36e623280dc055b7246185be39 not found: ID does not exist" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.030201 4765 scope.go:117] 
"RemoveContainer" containerID="67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.036723 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.038961 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="proxy-httpd" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.038996 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="proxy-httpd" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.039023 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="sg-core" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039031 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="sg-core" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.039049 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-central-agent" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039058 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-central-agent" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.039072 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-api" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039081 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-api" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.039102 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" 
containerName="nova-api-log" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039111 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-log" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.039133 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-notification-agent" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039141 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-notification-agent" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039394 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-central-agent" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039412 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-log" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039431 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="sg-core" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039454 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="proxy-httpd" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039471 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" containerName="nova-api-api" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.039489 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" containerName="ceilometer-notification-agent" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.040906 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.044989 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.045578 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.061356 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.068594 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.068925 4765 scope.go:117] "RemoveContainer" containerID="e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.079379 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.087806 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.088190 4765 scope.go:117] "RemoveContainer" containerID="67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.088575 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477\": container with ID starting with 67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477 not found: ID does not exist" containerID="67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.088601 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477"} err="failed to get container status \"67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477\": rpc error: code = NotFound desc = could not find container \"67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477\": container with ID starting with 67a0904505098c5cd470283bbb718d24d4be2750de68a28b5c92441b694ce477 not found: ID does not exist" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.088624 4765 scope.go:117] "RemoveContainer" containerID="e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac" Dec 03 20:57:37 crc kubenswrapper[4765]: E1203 20:57:37.088865 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac\": container with ID starting with e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac not found: ID does not exist" containerID="e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.088887 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac"} err="failed to get container status \"e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac\": rpc error: code = NotFound desc = could not find container \"e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac\": container with ID starting with e703d891673fa9f93e4e31df54044a651f11ad17fe666f8a568564a4ab0965ac not found: ID does not exist" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.095773 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.098482 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.100419 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.100668 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.100788 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.106997 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.188679 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.188724 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qrn\" (UniqueName: \"kubernetes.io/projected/9375a48d-3efa-435a-bc05-c344e97943ff-kube-api-access-q9qrn\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.188751 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.188875 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-logs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189009 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189043 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-run-httpd\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189084 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-scripts\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189184 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wt5\" (UniqueName: \"kubernetes.io/projected/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-kube-api-access-j2wt5\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189222 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189250 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-config-data\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189293 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189357 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189552 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-config-data\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.189604 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-log-httpd\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291264 
4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-config-data\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291359 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-log-httpd\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291408 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291427 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qrn\" (UniqueName: \"kubernetes.io/projected/9375a48d-3efa-435a-bc05-c344e97943ff-kube-api-access-q9qrn\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291448 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291472 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-logs\") pod \"nova-api-0\" (UID: 
\"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291504 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291520 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-run-httpd\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291543 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-scripts\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291565 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wt5\" (UniqueName: \"kubernetes.io/projected/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-kube-api-access-j2wt5\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291586 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291604 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-config-data\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.291641 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.292440 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-log-httpd\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.295805 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.297154 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-config-data\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 
20:57:37.297487 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-run-httpd\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.298316 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.298701 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.298830 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-logs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.299869 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.300914 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " 
pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.308749 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.313468 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-scripts\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.314544 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-config-data\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.323631 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wt5\" (UniqueName: \"kubernetes.io/projected/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-kube-api-access-j2wt5\") pod \"nova-api-0\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.324843 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qrn\" (UniqueName: \"kubernetes.io/projected/9375a48d-3efa-435a-bc05-c344e97943ff-kube-api-access-q9qrn\") pod \"ceilometer-0\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") " pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.367591 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.417855 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.846083 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.935605 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 20:57:37 crc kubenswrapper[4765]: W1203 20:57:37.941554 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9375a48d_3efa_435a_bc05_c344e97943ff.slice/crio-80fccf6f50f6c91f3805c4a26b11c0ddd4655918f330f38cc462af16df3fd377 WatchSource:0}: Error finding container 80fccf6f50f6c91f3805c4a26b11c0ddd4655918f330f38cc462af16df3fd377: Status 404 returned error can't find the container with id 80fccf6f50f6c91f3805c4a26b11c0ddd4655918f330f38cc462af16df3fd377 Dec 03 20:57:37 crc kubenswrapper[4765]: I1203 20:57:37.953071 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4d38b18-fadf-4e12-bc90-a52b195ecd5a","Type":"ContainerStarted","Data":"73e56d31f284bfbf187cfcaa83af383d0f4db82c2ec1cbd9a0938cbee4284ff8"} Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.218332 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.259498 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.370113 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8" path="/var/lib/kubelet/pods/24d3dac3-1c93-4bbd-886d-1f8f1a0b3be8/volumes" Dec 03 20:57:38 crc kubenswrapper[4765]: 
I1203 20:57:38.371452 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be613e1-33fd-4e42-9834-d35e9cba181e" path="/var/lib/kubelet/pods/3be613e1-33fd-4e42-9834-d35e9cba181e/volumes" Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.963087 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerStarted","Data":"65946a80c04385d40c2df28026342005506998cf88de180927e9cca2b0c29b45"} Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.963504 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerStarted","Data":"80fccf6f50f6c91f3805c4a26b11c0ddd4655918f330f38cc462af16df3fd377"} Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.965547 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4d38b18-fadf-4e12-bc90-a52b195ecd5a","Type":"ContainerStarted","Data":"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e"} Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.965570 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4d38b18-fadf-4e12-bc90-a52b195ecd5a","Type":"ContainerStarted","Data":"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20"} Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.999255 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 03 20:57:38 crc kubenswrapper[4765]: I1203 20:57:38.999351 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.999337947 podStartE2EDuration="2.999337947s" podCreationTimestamp="2025-12-03 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 
20:57:38.996562532 +0000 UTC m=+1156.927107683" watchObservedRunningTime="2025-12-03 20:57:38.999337947 +0000 UTC m=+1156.929883108" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.146389 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-s5wxp"] Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.148134 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.150316 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.154711 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.160350 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5wxp"] Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.228382 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-scripts\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.228424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.228505 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4lzt\" (UniqueName: 
\"kubernetes.io/projected/529d6306-abe5-44f9-8804-1f374c25cadd-kube-api-access-f4lzt\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.228526 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-config-data\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.330159 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4lzt\" (UniqueName: \"kubernetes.io/projected/529d6306-abe5-44f9-8804-1f374c25cadd-kube-api-access-f4lzt\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.330208 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-config-data\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.330342 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-scripts\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.330370 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.335066 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.335070 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-scripts\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.345703 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-config-data\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.350945 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4lzt\" (UniqueName: \"kubernetes.io/projected/529d6306-abe5-44f9-8804-1f374c25cadd-kube-api-access-f4lzt\") pod \"nova-cell1-cell-mapping-s5wxp\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.466876 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.980914 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5wxp"] Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.983038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerStarted","Data":"36408eb3cf5a2a3300b72d4e20e962779d4da944dfc3c0a8a6e2a98a8b4261ef"} Dec 03 20:57:39 crc kubenswrapper[4765]: I1203 20:57:39.983085 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerStarted","Data":"9608b9411b03ef15d0cc3502e92caeec8958997628bf51461ffde67a508dcfe3"} Dec 03 20:57:39 crc kubenswrapper[4765]: W1203 20:57:39.992034 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod529d6306_abe5_44f9_8804_1f374c25cadd.slice/crio-a43302af48bf3279901748e5716ff558c2bfc9dabaedae650d9e003ca9338741 WatchSource:0}: Error finding container a43302af48bf3279901748e5716ff558c2bfc9dabaedae650d9e003ca9338741: Status 404 returned error can't find the container with id a43302af48bf3279901748e5716ff558c2bfc9dabaedae650d9e003ca9338741 Dec 03 20:57:40 crc kubenswrapper[4765]: I1203 20:57:40.975548 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:57:40 crc kubenswrapper[4765]: I1203 20:57:40.998688 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerStarted","Data":"d365a1fc3f649a27cbb47310882ccb510942f2e48e6df5fa64da72415c895cf7"} Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.003947 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.006580 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5wxp" event={"ID":"529d6306-abe5-44f9-8804-1f374c25cadd","Type":"ContainerStarted","Data":"f62f80ad37ec5176f76eba14f823fbe58ecaa4c36e50e2accae28fc58b07d95f"} Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.006673 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5wxp" event={"ID":"529d6306-abe5-44f9-8804-1f374c25cadd","Type":"ContainerStarted","Data":"a43302af48bf3279901748e5716ff558c2bfc9dabaedae650d9e003ca9338741"} Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.083009 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-txjf9"] Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.083250 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerName="dnsmasq-dns" containerID="cri-o://eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652" gracePeriod=10 Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.102452 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.384916719 podStartE2EDuration="4.102430496s" podCreationTimestamp="2025-12-03 20:57:37 +0000 UTC" firstStartedPulling="2025-12-03 20:57:37.94450037 +0000 UTC m=+1155.875045521" lastFinishedPulling="2025-12-03 20:57:40.662014117 +0000 UTC m=+1158.592559298" observedRunningTime="2025-12-03 20:57:41.058718678 +0000 UTC m=+1158.989263829" watchObservedRunningTime="2025-12-03 20:57:41.102430496 +0000 UTC m=+1159.032975647" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.121809 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-s5wxp" 
podStartSLOduration=2.121785088 podStartE2EDuration="2.121785088s" podCreationTimestamp="2025-12-03 20:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:41.080223737 +0000 UTC m=+1159.010768888" watchObservedRunningTime="2025-12-03 20:57:41.121785088 +0000 UTC m=+1159.052330239" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.572638 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.676407 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-sb\") pod \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.676486 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-config\") pod \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.676685 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlvvr\" (UniqueName: \"kubernetes.io/projected/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-kube-api-access-dlvvr\") pod \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.676742 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-nb\") pod \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " Dec 03 
20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.676802 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-dns-svc\") pod \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\" (UID: \"fbb6de7b-6119-49ae-ad28-7d6a92195ab8\") " Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.689496 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-kube-api-access-dlvvr" (OuterVolumeSpecName: "kube-api-access-dlvvr") pod "fbb6de7b-6119-49ae-ad28-7d6a92195ab8" (UID: "fbb6de7b-6119-49ae-ad28-7d6a92195ab8"). InnerVolumeSpecName "kube-api-access-dlvvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.730012 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fbb6de7b-6119-49ae-ad28-7d6a92195ab8" (UID: "fbb6de7b-6119-49ae-ad28-7d6a92195ab8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.739484 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fbb6de7b-6119-49ae-ad28-7d6a92195ab8" (UID: "fbb6de7b-6119-49ae-ad28-7d6a92195ab8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.740148 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fbb6de7b-6119-49ae-ad28-7d6a92195ab8" (UID: "fbb6de7b-6119-49ae-ad28-7d6a92195ab8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.741070 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-config" (OuterVolumeSpecName: "config") pod "fbb6de7b-6119-49ae-ad28-7d6a92195ab8" (UID: "fbb6de7b-6119-49ae-ad28-7d6a92195ab8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.778444 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.778472 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.778485 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 20:57:41.778495 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlvvr\" (UniqueName: \"kubernetes.io/projected/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-kube-api-access-dlvvr\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:41 crc kubenswrapper[4765]: I1203 
20:57:41.778506 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb6de7b-6119-49ae-ad28-7d6a92195ab8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.017523 4765 generic.go:334] "Generic (PLEG): container finished" podID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerID="eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652" exitCode=0 Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.017557 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" event={"ID":"fbb6de7b-6119-49ae-ad28-7d6a92195ab8","Type":"ContainerDied","Data":"eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652"} Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.017596 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.017617 4765 scope.go:117] "RemoveContainer" containerID="eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.017605 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-txjf9" event={"ID":"fbb6de7b-6119-49ae-ad28-7d6a92195ab8","Type":"ContainerDied","Data":"9cdfc316ec2416d8a80e10879aeca558d888c1a50e276d3ceef7715b407d848a"} Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.042624 4765 scope.go:117] "RemoveContainer" containerID="f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.058281 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-txjf9"] Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.066963 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-txjf9"] Dec 03 20:57:42 crc 
kubenswrapper[4765]: I1203 20:57:42.075259 4765 scope.go:117] "RemoveContainer" containerID="eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652" Dec 03 20:57:42 crc kubenswrapper[4765]: E1203 20:57:42.075700 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652\": container with ID starting with eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652 not found: ID does not exist" containerID="eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.075752 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652"} err="failed to get container status \"eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652\": rpc error: code = NotFound desc = could not find container \"eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652\": container with ID starting with eaae592e49de16e7d5d3390c0783710170f86fcdfd3ead53a670baf855ad1652 not found: ID does not exist" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.075779 4765 scope.go:117] "RemoveContainer" containerID="f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15" Dec 03 20:57:42 crc kubenswrapper[4765]: E1203 20:57:42.076426 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15\": container with ID starting with f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15 not found: ID does not exist" containerID="f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.076469 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15"} err="failed to get container status \"f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15\": rpc error: code = NotFound desc = could not find container \"f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15\": container with ID starting with f37b282277d664f23f8795e3fd4529e48df663db2dfc3cb00310b3edb514ae15 not found: ID does not exist" Dec 03 20:57:42 crc kubenswrapper[4765]: I1203 20:57:42.382468 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" path="/var/lib/kubelet/pods/fbb6de7b-6119-49ae-ad28-7d6a92195ab8/volumes" Dec 03 20:57:45 crc kubenswrapper[4765]: I1203 20:57:45.049025 4765 generic.go:334] "Generic (PLEG): container finished" podID="529d6306-abe5-44f9-8804-1f374c25cadd" containerID="f62f80ad37ec5176f76eba14f823fbe58ecaa4c36e50e2accae28fc58b07d95f" exitCode=0 Dec 03 20:57:45 crc kubenswrapper[4765]: I1203 20:57:45.049091 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5wxp" event={"ID":"529d6306-abe5-44f9-8804-1f374c25cadd","Type":"ContainerDied","Data":"f62f80ad37ec5176f76eba14f823fbe58ecaa4c36e50e2accae28fc58b07d95f"} Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.407534 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.472614 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-combined-ca-bundle\") pod \"529d6306-abe5-44f9-8804-1f374c25cadd\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.472693 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4lzt\" (UniqueName: \"kubernetes.io/projected/529d6306-abe5-44f9-8804-1f374c25cadd-kube-api-access-f4lzt\") pod \"529d6306-abe5-44f9-8804-1f374c25cadd\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.472793 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-scripts\") pod \"529d6306-abe5-44f9-8804-1f374c25cadd\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.472933 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-config-data\") pod \"529d6306-abe5-44f9-8804-1f374c25cadd\" (UID: \"529d6306-abe5-44f9-8804-1f374c25cadd\") " Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.478171 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529d6306-abe5-44f9-8804-1f374c25cadd-kube-api-access-f4lzt" (OuterVolumeSpecName: "kube-api-access-f4lzt") pod "529d6306-abe5-44f9-8804-1f374c25cadd" (UID: "529d6306-abe5-44f9-8804-1f374c25cadd"). InnerVolumeSpecName "kube-api-access-f4lzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.478799 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-scripts" (OuterVolumeSpecName: "scripts") pod "529d6306-abe5-44f9-8804-1f374c25cadd" (UID: "529d6306-abe5-44f9-8804-1f374c25cadd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.501649 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-config-data" (OuterVolumeSpecName: "config-data") pod "529d6306-abe5-44f9-8804-1f374c25cadd" (UID: "529d6306-abe5-44f9-8804-1f374c25cadd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.507257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "529d6306-abe5-44f9-8804-1f374c25cadd" (UID: "529d6306-abe5-44f9-8804-1f374c25cadd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.576262 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.576319 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.576330 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529d6306-abe5-44f9-8804-1f374c25cadd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:46 crc kubenswrapper[4765]: I1203 20:57:46.576342 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4lzt\" (UniqueName: \"kubernetes.io/projected/529d6306-abe5-44f9-8804-1f374c25cadd-kube-api-access-f4lzt\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.075381 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5wxp" event={"ID":"529d6306-abe5-44f9-8804-1f374c25cadd","Type":"ContainerDied","Data":"a43302af48bf3279901748e5716ff558c2bfc9dabaedae650d9e003ca9338741"} Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.075460 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a43302af48bf3279901748e5716ff558c2bfc9dabaedae650d9e003ca9338741" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.075614 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5wxp" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.252795 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.253606 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="59204443-4c2b-45aa-97b4-75d33207cc52" containerName="nova-scheduler-scheduler" containerID="cri-o://6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4" gracePeriod=30 Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.264891 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.265154 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-log" containerID="cri-o://923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20" gracePeriod=30 Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.265283 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-api" containerID="cri-o://7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e" gracePeriod=30 Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.283736 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.284362 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-log" containerID="cri-o://4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6" gracePeriod=30 Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.284482 4765 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-metadata" containerID="cri-o://037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99" gracePeriod=30 Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.830552 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.902271 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-public-tls-certs\") pod \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.902372 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wt5\" (UniqueName: \"kubernetes.io/projected/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-kube-api-access-j2wt5\") pod \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.902494 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-internal-tls-certs\") pod \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.902581 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-config-data\") pod \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.902666 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-logs\") pod \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.902767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-combined-ca-bundle\") pod \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\" (UID: \"d4d38b18-fadf-4e12-bc90-a52b195ecd5a\") " Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.905650 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-logs" (OuterVolumeSpecName: "logs") pod "d4d38b18-fadf-4e12-bc90-a52b195ecd5a" (UID: "d4d38b18-fadf-4e12-bc90-a52b195ecd5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.918491 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-kube-api-access-j2wt5" (OuterVolumeSpecName: "kube-api-access-j2wt5") pod "d4d38b18-fadf-4e12-bc90-a52b195ecd5a" (UID: "d4d38b18-fadf-4e12-bc90-a52b195ecd5a"). InnerVolumeSpecName "kube-api-access-j2wt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.946355 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4d38b18-fadf-4e12-bc90-a52b195ecd5a" (UID: "d4d38b18-fadf-4e12-bc90-a52b195ecd5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.949748 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-config-data" (OuterVolumeSpecName: "config-data") pod "d4d38b18-fadf-4e12-bc90-a52b195ecd5a" (UID: "d4d38b18-fadf-4e12-bc90-a52b195ecd5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.961725 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4d38b18-fadf-4e12-bc90-a52b195ecd5a" (UID: "d4d38b18-fadf-4e12-bc90-a52b195ecd5a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.969989 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4d38b18-fadf-4e12-bc90-a52b195ecd5a" (UID: "d4d38b18-fadf-4e12-bc90-a52b195ecd5a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:47 crc kubenswrapper[4765]: I1203 20:57:47.973464 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.004952 4765 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.004999 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.005014 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.005027 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.005039 4765 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.005050 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wt5\" (UniqueName: \"kubernetes.io/projected/d4d38b18-fadf-4e12-bc90-a52b195ecd5a-kube-api-access-j2wt5\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.084198 4765 generic.go:334] "Generic (PLEG): container finished" podID="59204443-4c2b-45aa-97b4-75d33207cc52" containerID="6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4" exitCode=0 Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.084267 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59204443-4c2b-45aa-97b4-75d33207cc52","Type":"ContainerDied","Data":"6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4"} Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.084312 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59204443-4c2b-45aa-97b4-75d33207cc52","Type":"ContainerDied","Data":"a901ff1e57fc0e54a7de27363e76037c9eb3495e2dd86d3c97cb42ab5d43aed0"} Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.084330 4765 scope.go:117] "RemoveContainer" containerID="6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.084654 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.086241 4765 generic.go:334] "Generic (PLEG): container finished" podID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerID="7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e" exitCode=0 Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.086378 4765 generic.go:334] "Generic (PLEG): container finished" podID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerID="923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20" exitCode=143 Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.086283 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.086315 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4d38b18-fadf-4e12-bc90-a52b195ecd5a","Type":"ContainerDied","Data":"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e"} Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.087295 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4d38b18-fadf-4e12-bc90-a52b195ecd5a","Type":"ContainerDied","Data":"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20"} Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.087364 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4d38b18-fadf-4e12-bc90-a52b195ecd5a","Type":"ContainerDied","Data":"73e56d31f284bfbf187cfcaa83af383d0f4db82c2ec1cbd9a0938cbee4284ff8"} Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.093313 4765 generic.go:334] "Generic (PLEG): container finished" podID="dc34be1f-8c78-47ab-9adf-019219027643" containerID="4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6" exitCode=143 Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.093359 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc34be1f-8c78-47ab-9adf-019219027643","Type":"ContainerDied","Data":"4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6"} Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.106407 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-combined-ca-bundle\") pod \"59204443-4c2b-45aa-97b4-75d33207cc52\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.106543 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jzwvr\" (UniqueName: \"kubernetes.io/projected/59204443-4c2b-45aa-97b4-75d33207cc52-kube-api-access-jzwvr\") pod \"59204443-4c2b-45aa-97b4-75d33207cc52\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.106656 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-config-data\") pod \"59204443-4c2b-45aa-97b4-75d33207cc52\" (UID: \"59204443-4c2b-45aa-97b4-75d33207cc52\") " Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.110018 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59204443-4c2b-45aa-97b4-75d33207cc52-kube-api-access-jzwvr" (OuterVolumeSpecName: "kube-api-access-jzwvr") pod "59204443-4c2b-45aa-97b4-75d33207cc52" (UID: "59204443-4c2b-45aa-97b4-75d33207cc52"). InnerVolumeSpecName "kube-api-access-jzwvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.110029 4765 scope.go:117] "RemoveContainer" containerID="6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.117672 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4\": container with ID starting with 6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4 not found: ID does not exist" containerID="6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.117724 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4"} err="failed to get container status 
\"6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4\": rpc error: code = NotFound desc = could not find container \"6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4\": container with ID starting with 6f1ffaba5eb4a9fb8444549bbee9796b41d62a709c820f8edcc936e555afdce4 not found: ID does not exist" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.117769 4765 scope.go:117] "RemoveContainer" containerID="7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.129444 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.140695 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.148493 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-config-data" (OuterVolumeSpecName: "config-data") pod "59204443-4c2b-45aa-97b4-75d33207cc52" (UID: "59204443-4c2b-45aa-97b4-75d33207cc52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.150958 4765 scope.go:117] "RemoveContainer" containerID="923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.154260 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59204443-4c2b-45aa-97b4-75d33207cc52" (UID: "59204443-4c2b-45aa-97b4-75d33207cc52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.154759 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.155461 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerName="dnsmasq-dns" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.155592 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerName="dnsmasq-dns" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.155721 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529d6306-abe5-44f9-8804-1f374c25cadd" containerName="nova-manage" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.155822 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="529d6306-abe5-44f9-8804-1f374c25cadd" containerName="nova-manage" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.155992 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59204443-4c2b-45aa-97b4-75d33207cc52" containerName="nova-scheduler-scheduler" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.156120 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="59204443-4c2b-45aa-97b4-75d33207cc52" containerName="nova-scheduler-scheduler" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.156280 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-log" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.156420 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-log" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.156556 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerName="init" Dec 03 20:57:48 crc 
kubenswrapper[4765]: I1203 20:57:48.156690 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerName="init" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.156813 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-api" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.156918 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-api" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.157378 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="59204443-4c2b-45aa-97b4-75d33207cc52" containerName="nova-scheduler-scheduler" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.157562 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-log" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.157691 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="529d6306-abe5-44f9-8804-1f374c25cadd" containerName="nova-manage" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.157818 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" containerName="nova-api-api" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.157921 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb6de7b-6119-49ae-ad28-7d6a92195ab8" containerName="dnsmasq-dns" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.159593 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.164215 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.164273 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.165163 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.176004 4765 scope.go:117] "RemoveContainer" containerID="7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e" Dec 03 20:57:48 crc kubenswrapper[4765]: E1203 20:57:48.178291 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e\": container with ID starting with 7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e not found: ID does not exist" containerID="7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.178400 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e"} err="failed to get container status \"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e\": rpc error: code = NotFound desc = could not find container \"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e\": container with ID starting with 7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e not found: ID does not exist" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.178492 4765 scope.go:117] "RemoveContainer" containerID="923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20" Dec 03 20:57:48 crc 
kubenswrapper[4765]: E1203 20:57:48.178738 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20\": container with ID starting with 923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20 not found: ID does not exist" containerID="923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.178812 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20"} err="failed to get container status \"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20\": rpc error: code = NotFound desc = could not find container \"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20\": container with ID starting with 923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20 not found: ID does not exist" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.178880 4765 scope.go:117] "RemoveContainer" containerID="7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.184459 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.190785 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e"} err="failed to get container status \"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e\": rpc error: code = NotFound desc = could not find container \"7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e\": container with ID starting with 7d5dc78a6feaba399fa94df3b4f1adc58380a4b4de5fc727fba9392909b0973e not found: ID does not exist" Dec 03 20:57:48 crc kubenswrapper[4765]: 
I1203 20:57:48.190824 4765 scope.go:117] "RemoveContainer" containerID="923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.191060 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20"} err="failed to get container status \"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20\": rpc error: code = NotFound desc = could not find container \"923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20\": container with ID starting with 923f0d8a7ae61d516392e5b9fb907d52ec1437585c616934dfea9d993f930d20 not found: ID does not exist" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208065 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208149 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f90377-318a-4a36-a187-62434c1fb8c3-logs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208180 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208206 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208235 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw2f\" (UniqueName: \"kubernetes.io/projected/c3f90377-318a-4a36-a187-62434c1fb8c3-kube-api-access-cxw2f\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208257 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-config-data\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208452 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzwvr\" (UniqueName: \"kubernetes.io/projected/59204443-4c2b-45aa-97b4-75d33207cc52-kube-api-access-jzwvr\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208470 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.208482 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59204443-4c2b-45aa-97b4-75d33207cc52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.310419 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.310498 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f90377-318a-4a36-a187-62434c1fb8c3-logs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.310528 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.310545 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.310575 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxw2f\" (UniqueName: \"kubernetes.io/projected/c3f90377-318a-4a36-a187-62434c1fb8c3-kube-api-access-cxw2f\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.310600 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-config-data\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.311783 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f90377-318a-4a36-a187-62434c1fb8c3-logs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.321340 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-config-data\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.321483 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.321644 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.330574 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxw2f\" (UniqueName: \"kubernetes.io/projected/c3f90377-318a-4a36-a187-62434c1fb8c3-kube-api-access-cxw2f\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.331938 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f90377-318a-4a36-a187-62434c1fb8c3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3f90377-318a-4a36-a187-62434c1fb8c3\") " 
pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.370624 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d38b18-fadf-4e12-bc90-a52b195ecd5a" path="/var/lib/kubelet/pods/d4d38b18-fadf-4e12-bc90-a52b195ecd5a/volumes" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.459217 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.469537 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.478394 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.479495 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.481970 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.485753 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.491754 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.629632 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474n7\" (UniqueName: \"kubernetes.io/projected/6963989c-bc38-471a-a22a-c7e90de20bf9-kube-api-access-474n7\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.629784 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6963989c-bc38-471a-a22a-c7e90de20bf9-config-data\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.629846 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6963989c-bc38-471a-a22a-c7e90de20bf9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.731292 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6963989c-bc38-471a-a22a-c7e90de20bf9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.731407 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-474n7\" (UniqueName: 
\"kubernetes.io/projected/6963989c-bc38-471a-a22a-c7e90de20bf9-kube-api-access-474n7\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.731532 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6963989c-bc38-471a-a22a-c7e90de20bf9-config-data\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.737006 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6963989c-bc38-471a-a22a-c7e90de20bf9-config-data\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.737827 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6963989c-bc38-471a-a22a-c7e90de20bf9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.749062 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-474n7\" (UniqueName: \"kubernetes.io/projected/6963989c-bc38-471a-a22a-c7e90de20bf9-kube-api-access-474n7\") pod \"nova-scheduler-0\" (UID: \"6963989c-bc38-471a-a22a-c7e90de20bf9\") " pod="openstack/nova-scheduler-0" Dec 03 20:57:48 crc kubenswrapper[4765]: I1203 20:57:48.792726 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 20:57:49 crc kubenswrapper[4765]: I1203 20:57:49.012371 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 20:57:49 crc kubenswrapper[4765]: I1203 20:57:49.104687 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3f90377-318a-4a36-a187-62434c1fb8c3","Type":"ContainerStarted","Data":"27854776ea7df2d4bfe2528c5d41a3449a6e6a5f1ba465576f91fdbb0e334fb0"} Dec 03 20:57:49 crc kubenswrapper[4765]: I1203 20:57:49.217513 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 20:57:49 crc kubenswrapper[4765]: W1203 20:57:49.219703 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6963989c_bc38_471a_a22a_c7e90de20bf9.slice/crio-5b2a0e37fcd49a3599230893e91982d9b026411d0b0334774cc450f8ce9fb646 WatchSource:0}: Error finding container 5b2a0e37fcd49a3599230893e91982d9b026411d0b0334774cc450f8ce9fb646: Status 404 returned error can't find the container with id 5b2a0e37fcd49a3599230893e91982d9b026411d0b0334774cc450f8ce9fb646 Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.120823 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3f90377-318a-4a36-a187-62434c1fb8c3","Type":"ContainerStarted","Data":"d98608aece8290300cdba3092087e346f8b05602dbbecd31cd322e30da49e6ea"} Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.120890 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3f90377-318a-4a36-a187-62434c1fb8c3","Type":"ContainerStarted","Data":"de304629efe468cb911e7aa3d631a9bf8fffc9aaac8da9e2fc4a494484ea72f5"} Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.123923 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"6963989c-bc38-471a-a22a-c7e90de20bf9","Type":"ContainerStarted","Data":"da116ac4e31c4fe4c748fc197ba79628b45b8048765b2ebaca9cbfaef9a6b95a"} Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.123979 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6963989c-bc38-471a-a22a-c7e90de20bf9","Type":"ContainerStarted","Data":"5b2a0e37fcd49a3599230893e91982d9b026411d0b0334774cc450f8ce9fb646"} Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.154921 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.154903773 podStartE2EDuration="2.154903773s" podCreationTimestamp="2025-12-03 20:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:50.140206254 +0000 UTC m=+1168.070751415" watchObservedRunningTime="2025-12-03 20:57:50.154903773 +0000 UTC m=+1168.085448914" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.179233 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.179218911 podStartE2EDuration="2.179218911s" podCreationTimestamp="2025-12-03 20:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:50.171731269 +0000 UTC m=+1168.102276420" watchObservedRunningTime="2025-12-03 20:57:50.179218911 +0000 UTC m=+1168.109764062" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.382083 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59204443-4c2b-45aa-97b4-75d33207cc52" path="/var/lib/kubelet/pods/59204443-4c2b-45aa-97b4-75d33207cc52/volumes" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.442935 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": read tcp 10.217.0.2:48938->10.217.0.177:8775: read: connection reset by peer" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.443017 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": read tcp 10.217.0.2:48946->10.217.0.177:8775: read: connection reset by peer" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.894130 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.982949 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-combined-ca-bundle\") pod \"dc34be1f-8c78-47ab-9adf-019219027643\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.982997 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-nova-metadata-tls-certs\") pod \"dc34be1f-8c78-47ab-9adf-019219027643\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.983166 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28kpv\" (UniqueName: \"kubernetes.io/projected/dc34be1f-8c78-47ab-9adf-019219027643-kube-api-access-28kpv\") pod \"dc34be1f-8c78-47ab-9adf-019219027643\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.983229 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc34be1f-8c78-47ab-9adf-019219027643-logs\") pod \"dc34be1f-8c78-47ab-9adf-019219027643\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.983322 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-config-data\") pod \"dc34be1f-8c78-47ab-9adf-019219027643\" (UID: \"dc34be1f-8c78-47ab-9adf-019219027643\") " Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.984208 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc34be1f-8c78-47ab-9adf-019219027643-logs" (OuterVolumeSpecName: "logs") pod "dc34be1f-8c78-47ab-9adf-019219027643" (UID: "dc34be1f-8c78-47ab-9adf-019219027643"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 20:57:50 crc kubenswrapper[4765]: I1203 20:57:50.994057 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc34be1f-8c78-47ab-9adf-019219027643-kube-api-access-28kpv" (OuterVolumeSpecName: "kube-api-access-28kpv") pod "dc34be1f-8c78-47ab-9adf-019219027643" (UID: "dc34be1f-8c78-47ab-9adf-019219027643"). InnerVolumeSpecName "kube-api-access-28kpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.019874 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc34be1f-8c78-47ab-9adf-019219027643" (UID: "dc34be1f-8c78-47ab-9adf-019219027643"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.022138 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-config-data" (OuterVolumeSpecName: "config-data") pod "dc34be1f-8c78-47ab-9adf-019219027643" (UID: "dc34be1f-8c78-47ab-9adf-019219027643"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.033026 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dc34be1f-8c78-47ab-9adf-019219027643" (UID: "dc34be1f-8c78-47ab-9adf-019219027643"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.085501 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc34be1f-8c78-47ab-9adf-019219027643-logs\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.085539 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.085551 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.085561 4765 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc34be1f-8c78-47ab-9adf-019219027643-nova-metadata-tls-certs\") on node \"crc\" DevicePath 
\"\"" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.085571 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28kpv\" (UniqueName: \"kubernetes.io/projected/dc34be1f-8c78-47ab-9adf-019219027643-kube-api-access-28kpv\") on node \"crc\" DevicePath \"\"" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.142072 4765 generic.go:334] "Generic (PLEG): container finished" podID="dc34be1f-8c78-47ab-9adf-019219027643" containerID="037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99" exitCode=0 Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.142160 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc34be1f-8c78-47ab-9adf-019219027643","Type":"ContainerDied","Data":"037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99"} Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.142208 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc34be1f-8c78-47ab-9adf-019219027643","Type":"ContainerDied","Data":"a30328bc9733ce8826400c387bbe465976bd41009aeb04c79e760e2b85edb6aa"} Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.142226 4765 scope.go:117] "RemoveContainer" containerID="037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.143106 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.173857 4765 scope.go:117] "RemoveContainer" containerID="4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.188674 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.197355 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.212605 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:51 crc kubenswrapper[4765]: E1203 20:57:51.213075 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-log" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213096 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-log" Dec 03 20:57:51 crc kubenswrapper[4765]: E1203 20:57:51.213120 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-metadata" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213124 4765 scope.go:117] "RemoveContainer" containerID="037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213131 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-metadata" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213565 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-log" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213591 4765 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="dc34be1f-8c78-47ab-9adf-019219027643" containerName="nova-metadata-metadata" Dec 03 20:57:51 crc kubenswrapper[4765]: E1203 20:57:51.213683 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99\": container with ID starting with 037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99 not found: ID does not exist" containerID="037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213731 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99"} err="failed to get container status \"037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99\": rpc error: code = NotFound desc = could not find container \"037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99\": container with ID starting with 037b120b9d3db5927416ca39dedfe57959884fb08e30ea911e2bdbf2a949ea99 not found: ID does not exist" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.213761 4765 scope.go:117] "RemoveContainer" containerID="4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6" Dec 03 20:57:51 crc kubenswrapper[4765]: E1203 20:57:51.214092 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6\": container with ID starting with 4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6 not found: ID does not exist" containerID="4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.214142 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6"} err="failed to get container status \"4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6\": rpc error: code = NotFound desc = could not find container \"4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6\": container with ID starting with 4d8ceae4d9ce66465d67df7d7eb5d19105baed55ed37df11f37c8fd180c6d9b6 not found: ID does not exist" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.214538 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.217690 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.229623 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.237049 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.289507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-config-data\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.289760 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11f148f-d7db-4776-a326-cb655caf8b19-logs\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.289813 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.289841 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2m9\" (UniqueName: \"kubernetes.io/projected/c11f148f-d7db-4776-a326-cb655caf8b19-kube-api-access-lv2m9\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.289874 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: E1203 20:57:51.358106 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc34be1f_8c78_47ab_9adf_019219027643.slice/crio-a30328bc9733ce8826400c387bbe465976bd41009aeb04c79e760e2b85edb6aa\": RecentStats: unable to find data in memory cache]" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.391381 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11f148f-d7db-4776-a326-cb655caf8b19-logs\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.391457 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.391489 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2m9\" (UniqueName: \"kubernetes.io/projected/c11f148f-d7db-4776-a326-cb655caf8b19-kube-api-access-lv2m9\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.391522 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.391553 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-config-data\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.392177 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11f148f-d7db-4776-a326-cb655caf8b19-logs\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.395207 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " 
pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.396831 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-config-data\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.398073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11f148f-d7db-4776-a326-cb655caf8b19-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.411507 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2m9\" (UniqueName: \"kubernetes.io/projected/c11f148f-d7db-4776-a326-cb655caf8b19-kube-api-access-lv2m9\") pod \"nova-metadata-0\" (UID: \"c11f148f-d7db-4776-a326-cb655caf8b19\") " pod="openstack/nova-metadata-0" Dec 03 20:57:51 crc kubenswrapper[4765]: I1203 20:57:51.549281 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 20:57:52 crc kubenswrapper[4765]: I1203 20:57:52.054066 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 20:57:52 crc kubenswrapper[4765]: W1203 20:57:52.064579 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc11f148f_d7db_4776_a326_cb655caf8b19.slice/crio-7bc4fe1342fce6d217e42b52924ff7e1df963a5e38cff5e2cd9eba04a60e3f72 WatchSource:0}: Error finding container 7bc4fe1342fce6d217e42b52924ff7e1df963a5e38cff5e2cd9eba04a60e3f72: Status 404 returned error can't find the container with id 7bc4fe1342fce6d217e42b52924ff7e1df963a5e38cff5e2cd9eba04a60e3f72 Dec 03 20:57:52 crc kubenswrapper[4765]: I1203 20:57:52.154032 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c11f148f-d7db-4776-a326-cb655caf8b19","Type":"ContainerStarted","Data":"7bc4fe1342fce6d217e42b52924ff7e1df963a5e38cff5e2cd9eba04a60e3f72"} Dec 03 20:57:52 crc kubenswrapper[4765]: I1203 20:57:52.392345 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc34be1f-8c78-47ab-9adf-019219027643" path="/var/lib/kubelet/pods/dc34be1f-8c78-47ab-9adf-019219027643/volumes" Dec 03 20:57:53 crc kubenswrapper[4765]: I1203 20:57:53.168630 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c11f148f-d7db-4776-a326-cb655caf8b19","Type":"ContainerStarted","Data":"25ca103d6459a38b91d19a6f583ce7244d4915a53ee30ba9e371c520dd931c11"} Dec 03 20:57:53 crc kubenswrapper[4765]: I1203 20:57:53.169384 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c11f148f-d7db-4776-a326-cb655caf8b19","Type":"ContainerStarted","Data":"3ec403b1f651559cf199a1e6db01c5e1e4802e2a23205b4f0505c2f1a6ee9b04"} Dec 03 20:57:53 crc kubenswrapper[4765]: I1203 20:57:53.192422 4765 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.192407489 podStartE2EDuration="2.192407489s" podCreationTimestamp="2025-12-03 20:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:57:53.189371657 +0000 UTC m=+1171.119916818" watchObservedRunningTime="2025-12-03 20:57:53.192407489 +0000 UTC m=+1171.122952640" Dec 03 20:57:53 crc kubenswrapper[4765]: I1203 20:57:53.793049 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 20:57:56 crc kubenswrapper[4765]: I1203 20:57:56.550103 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 20:57:56 crc kubenswrapper[4765]: I1203 20:57:56.551788 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 20:57:58 crc kubenswrapper[4765]: I1203 20:57:58.486396 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 20:57:58 crc kubenswrapper[4765]: I1203 20:57:58.486477 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 20:57:58 crc kubenswrapper[4765]: I1203 20:57:58.793998 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 20:57:58 crc kubenswrapper[4765]: I1203 20:57:58.823672 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 20:57:59 crc kubenswrapper[4765]: I1203 20:57:59.350526 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 20:57:59 crc kubenswrapper[4765]: I1203 20:57:59.499496 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="c3f90377-318a-4a36-a187-62434c1fb8c3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 20:57:59 crc kubenswrapper[4765]: I1203 20:57:59.499531 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3f90377-318a-4a36-a187-62434c1fb8c3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 20:58:01 crc kubenswrapper[4765]: I1203 20:58:01.551538 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 20:58:01 crc kubenswrapper[4765]: I1203 20:58:01.551674 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 20:58:02 crc kubenswrapper[4765]: I1203 20:58:02.563612 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c11f148f-d7db-4776-a326-cb655caf8b19" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 20:58:02 crc kubenswrapper[4765]: I1203 20:58:02.563702 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c11f148f-d7db-4776-a326-cb655caf8b19" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 20:58:07 crc kubenswrapper[4765]: I1203 20:58:07.438377 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 20:58:08 crc kubenswrapper[4765]: I1203 20:58:08.502597 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Dec 03 20:58:08 crc kubenswrapper[4765]: I1203 20:58:08.503975 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 20:58:08 crc kubenswrapper[4765]: I1203 20:58:08.504691 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 20:58:08 crc kubenswrapper[4765]: I1203 20:58:08.511047 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 20:58:09 crc kubenswrapper[4765]: I1203 20:58:09.378107 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 20:58:09 crc kubenswrapper[4765]: I1203 20:58:09.384029 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 20:58:11 crc kubenswrapper[4765]: I1203 20:58:11.561846 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 20:58:11 crc kubenswrapper[4765]: I1203 20:58:11.568144 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 20:58:11 crc kubenswrapper[4765]: I1203 20:58:11.570728 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 20:58:12 crc kubenswrapper[4765]: I1203 20:58:12.421241 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 20:58:21 crc kubenswrapper[4765]: I1203 20:58:21.762998 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 20:58:23 crc kubenswrapper[4765]: I1203 20:58:23.170945 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:58:24 crc kubenswrapper[4765]: I1203 20:58:24.798249 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:58:24 crc kubenswrapper[4765]: I1203 20:58:24.798635 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:58:25 crc kubenswrapper[4765]: I1203 20:58:25.712080 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="33fa4225-5981-4b62-ac67-674896fbc047" containerName="rabbitmq" containerID="cri-o://bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8" gracePeriod=604797 Dec 03 20:58:27 crc kubenswrapper[4765]: I1203 20:58:27.276196 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerName="rabbitmq" containerID="cri-o://d0b976c3a48dfbe3f96de0a41b56dd2a9d4b600dd0c200a778ad93cda95cb6ea" gracePeriod=604796 Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.465772 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557130 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-server-conf\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557215 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-erlang-cookie\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557265 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fa4225-5981-4b62-ac67-674896fbc047-pod-info\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557283 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557385 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-plugins\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") " Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557431 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv4sn\" (UniqueName: 
\"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-kube-api-access-kv4sn\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557473 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-tls\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557505 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-plugins-conf\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557531 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fa4225-5981-4b62-ac67-674896fbc047-erlang-cookie-secret\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557551 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-confd\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.557574 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-config-data\") pod \"33fa4225-5981-4b62-ac67-674896fbc047\" (UID: \"33fa4225-5981-4b62-ac67-674896fbc047\") "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.558267 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.558650 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.558801 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.563370 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33fa4225-5981-4b62-ac67-674896fbc047-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.567761 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/33fa4225-5981-4b62-ac67-674896fbc047-pod-info" (OuterVolumeSpecName: "pod-info") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.568349 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-kube-api-access-kv4sn" (OuterVolumeSpecName: "kube-api-access-kv4sn") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "kube-api-access-kv4sn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.568470 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.594284 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.603521 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-config-data" (OuterVolumeSpecName: "config-data") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.638075 4765 generic.go:334] "Generic (PLEG): container finished" podID="33fa4225-5981-4b62-ac67-674896fbc047" containerID="bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8" exitCode=0
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.638130 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fa4225-5981-4b62-ac67-674896fbc047","Type":"ContainerDied","Data":"bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8"}
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.638162 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"33fa4225-5981-4b62-ac67-674896fbc047","Type":"ContainerDied","Data":"b4535d55aad41d8b619321ee742477798b0064d82a1ddeb1fb0ef78119875659"}
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.638183 4765 scope.go:117] "RemoveContainer" containerID="bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.638352 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.643583 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-server-conf" (OuterVolumeSpecName: "server-conf") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659592 4765 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-server-conf\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659622 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659642 4765 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33fa4225-5981-4b62-ac67-674896fbc047-pod-info\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659668 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659680 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659689 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv4sn\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-kube-api-access-kv4sn\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659697 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659705 4765 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659713 4765 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33fa4225-5981-4b62-ac67-674896fbc047-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.659721 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33fa4225-5981-4b62-ac67-674896fbc047-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.689548 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.693435 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "33fa4225-5981-4b62-ac67-674896fbc047" (UID: "33fa4225-5981-4b62-ac67-674896fbc047"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.695213 4765 scope.go:117] "RemoveContainer" containerID="0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.724256 4765 scope.go:117] "RemoveContainer" containerID="bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8"
Dec 03 20:58:32 crc kubenswrapper[4765]: E1203 20:58:32.724848 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8\": container with ID starting with bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8 not found: ID does not exist" containerID="bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.724967 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8"} err="failed to get container status \"bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8\": rpc error: code = NotFound desc = could not find container \"bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8\": container with ID starting with bb22a54feadfb62c043dda14a2f663f4cc5a52717160040c40e8fdf306b66ef8 not found: ID does not exist"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.725064 4765 scope.go:117] "RemoveContainer" containerID="0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb"
Dec 03 20:58:32 crc kubenswrapper[4765]: E1203 20:58:32.725427 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb\": container with ID starting with 0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb not found: ID does not exist" containerID="0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.725453 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb"} err="failed to get container status \"0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb\": rpc error: code = NotFound desc = could not find container \"0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb\": container with ID starting with 0258c2fd08b93e94c4766dce779ed2e82be25c7a01ce40dfbccbb69b6c69c3eb not found: ID does not exist"
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.761454 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33fa4225-5981-4b62-ac67-674896fbc047-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.761690 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:32 crc kubenswrapper[4765]: I1203 20:58:32.998178 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.011846 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.024495 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 20:58:33 crc kubenswrapper[4765]: E1203 20:58:33.024865 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fa4225-5981-4b62-ac67-674896fbc047" containerName="rabbitmq"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.024886 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fa4225-5981-4b62-ac67-674896fbc047" containerName="rabbitmq"
Dec 03 20:58:33 crc kubenswrapper[4765]: E1203 20:58:33.024937 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fa4225-5981-4b62-ac67-674896fbc047" containerName="setup-container"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.024947 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fa4225-5981-4b62-ac67-674896fbc047" containerName="setup-container"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.025162 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fa4225-5981-4b62-ac67-674896fbc047" containerName="rabbitmq"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.026363 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.029106 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bz8v8"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.029431 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.029510 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.029565 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.030866 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.030959 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.030998 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.033170 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167660 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be5953f-7d37-4d82-8ea7-3cff10d763c1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167701 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167735 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167755 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167809 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167853 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be5953f-7d37-4d82-8ea7-3cff10d763c1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167897 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbr5b\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-kube-api-access-mbr5b\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167922 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167949 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.167996 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.168024 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269417 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be5953f-7d37-4d82-8ea7-3cff10d763c1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269468 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbr5b\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-kube-api-access-mbr5b\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269500 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269525 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269585 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269626 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be5953f-7d37-4d82-8ea7-3cff10d763c1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269644 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269659 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269676 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.269710 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.270058 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.270393 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.270565 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.271116 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-config-data\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.271334 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.272553 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0be5953f-7d37-4d82-8ea7-3cff10d763c1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.274220 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0be5953f-7d37-4d82-8ea7-3cff10d763c1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.274574 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.275299 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.288505 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0be5953f-7d37-4d82-8ea7-3cff10d763c1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.289196 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbr5b\" (UniqueName: \"kubernetes.io/projected/0be5953f-7d37-4d82-8ea7-3cff10d763c1-kube-api-access-mbr5b\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.301859 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0be5953f-7d37-4d82-8ea7-3cff10d763c1\") " pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.348000 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.654364 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerID="d0b976c3a48dfbe3f96de0a41b56dd2a9d4b600dd0c200a778ad93cda95cb6ea" exitCode=0
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.654415 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fed9c9a-215a-4bd8-9381-6c20099e434d","Type":"ContainerDied","Data":"d0b976c3a48dfbe3f96de0a41b56dd2a9d4b600dd0c200a778ad93cda95cb6ea"}
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.861557 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.873765 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981386 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981464 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fed9c9a-215a-4bd8-9381-6c20099e434d-erlang-cookie-secret\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981524 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-confd\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981555 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-plugins-conf\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981599 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-tls\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981673 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-plugins\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981716 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fed9c9a-215a-4bd8-9381-6c20099e434d-pod-info\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981743 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-erlang-cookie\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981906 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-config-data\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981945 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-server-conf\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.981964 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbhwp\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-kube-api-access-cbhwp\") pod \"1fed9c9a-215a-4bd8-9381-6c20099e434d\" (UID: \"1fed9c9a-215a-4bd8-9381-6c20099e434d\") "
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.984817 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.984882 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.985428 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.987849 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:58:33 crc kubenswrapper[4765]: I1203 20:58:33.987957 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fed9c9a-215a-4bd8-9381-6c20099e434d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.001244 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-kube-api-access-cbhwp" (OuterVolumeSpecName: "kube-api-access-cbhwp") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "kube-api-access-cbhwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.005514 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.005534 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1fed9c9a-215a-4bd8-9381-6c20099e434d-pod-info" (OuterVolumeSpecName: "pod-info") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.024619 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-config-data" (OuterVolumeSpecName: "config-data") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.074995 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-server-conf" (OuterVolumeSpecName: "server-conf") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.083908 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.083947 4765 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-server-conf\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.083959 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbhwp\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-kube-api-access-cbhwp\") on node \"crc\" DevicePath \"\""
Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.083987 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Dec 03 20:58:34 
crc kubenswrapper[4765]: I1203 20:58:34.084000 4765 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fed9c9a-215a-4bd8-9381-6c20099e434d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.084012 4765 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fed9c9a-215a-4bd8-9381-6c20099e434d-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.084024 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.084034 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.084045 4765 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fed9c9a-215a-4bd8-9381-6c20099e434d-pod-info\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.084058 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.099998 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1fed9c9a-215a-4bd8-9381-6c20099e434d" (UID: "1fed9c9a-215a-4bd8-9381-6c20099e434d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.113127 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.187003 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.187034 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fed9c9a-215a-4bd8-9381-6c20099e434d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.369215 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fa4225-5981-4b62-ac67-674896fbc047" path="/var/lib/kubelet/pods/33fa4225-5981-4b62-ac67-674896fbc047/volumes" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.665847 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be5953f-7d37-4d82-8ea7-3cff10d763c1","Type":"ContainerStarted","Data":"712132da359ffb631bc274d732e891e5e1e3047e995a967d522d70f4b2bb1ef8"} Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.667929 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fed9c9a-215a-4bd8-9381-6c20099e434d","Type":"ContainerDied","Data":"47da83352684c164934c64ec2de1aa3bc946a62d6945c358c1152591fc2d53fe"} Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.667971 4765 scope.go:117] "RemoveContainer" containerID="d0b976c3a48dfbe3f96de0a41b56dd2a9d4b600dd0c200a778ad93cda95cb6ea" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.668001 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.689753 4765 scope.go:117] "RemoveContainer" containerID="61c07efe7283a35939fa86ec93228bad0cd86cad45e92d2b77fb644dd2ded0af" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.707627 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.728816 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.739419 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:58:34 crc kubenswrapper[4765]: E1203 20:58:34.740897 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerName="rabbitmq" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.740924 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerName="rabbitmq" Dec 03 20:58:34 crc kubenswrapper[4765]: E1203 20:58:34.740950 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerName="setup-container" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.740959 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerName="setup-container" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.741289 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" containerName="rabbitmq" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.742706 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.750861 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.751147 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.751557 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.751616 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.752014 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.752081 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w2d8x" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.752032 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.772659 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.810590 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psg7s\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-kube-api-access-psg7s\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.810647 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c0d9b22a-4baf-4947-bbba-e158c4e554e5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.810681 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.810728 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.810765 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.810968 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c0d9b22a-4baf-4947-bbba-e158c4e554e5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.811020 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.811053 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.811092 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.811112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.811142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912068 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912124 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psg7s\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-kube-api-access-psg7s\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912458 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c0d9b22a-4baf-4947-bbba-e158c4e554e5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912514 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912537 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912644 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912694 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c0d9b22a-4baf-4947-bbba-e158c4e554e5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.912886 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.913180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.913227 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.913264 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.913331 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.913355 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.913396 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.914222 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.914243 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: 
I1203 20:58:34.914505 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c0d9b22a-4baf-4947-bbba-e158c4e554e5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.917460 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c0d9b22a-4baf-4947-bbba-e158c4e554e5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.917520 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.917709 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.922655 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c0d9b22a-4baf-4947-bbba-e158c4e554e5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.927696 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psg7s\" (UniqueName: 
\"kubernetes.io/projected/c0d9b22a-4baf-4947-bbba-e158c4e554e5-kube-api-access-psg7s\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:34 crc kubenswrapper[4765]: I1203 20:58:34.949929 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0d9b22a-4baf-4947-bbba-e158c4e554e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:35 crc kubenswrapper[4765]: I1203 20:58:35.065691 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:58:35 crc kubenswrapper[4765]: I1203 20:58:35.527695 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 20:58:35 crc kubenswrapper[4765]: W1203 20:58:35.531202 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0d9b22a_4baf_4947_bbba_e158c4e554e5.slice/crio-1982792f5ca796d0d691a65f60879107bff162a9da58658137cdf1f9ae06ba4d WatchSource:0}: Error finding container 1982792f5ca796d0d691a65f60879107bff162a9da58658137cdf1f9ae06ba4d: Status 404 returned error can't find the container with id 1982792f5ca796d0d691a65f60879107bff162a9da58658137cdf1f9ae06ba4d Dec 03 20:58:35 crc kubenswrapper[4765]: I1203 20:58:35.680754 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be5953f-7d37-4d82-8ea7-3cff10d763c1","Type":"ContainerStarted","Data":"59411bca515e309c1997d51cfdaee2be7420fa5660d74acd6c38b0b488151a39"} Dec 03 20:58:35 crc kubenswrapper[4765]: I1203 20:58:35.685184 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"c0d9b22a-4baf-4947-bbba-e158c4e554e5","Type":"ContainerStarted","Data":"1982792f5ca796d0d691a65f60879107bff162a9da58658137cdf1f9ae06ba4d"} Dec 03 20:58:36 crc kubenswrapper[4765]: I1203 20:58:36.376265 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fed9c9a-215a-4bd8-9381-6c20099e434d" path="/var/lib/kubelet/pods/1fed9c9a-215a-4bd8-9381-6c20099e434d/volumes" Dec 03 20:58:37 crc kubenswrapper[4765]: I1203 20:58:37.708788 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c0d9b22a-4baf-4947-bbba-e158c4e554e5","Type":"ContainerStarted","Data":"d93a0f8c69bc734deb27bd9497d9e69d3b6d80d6160c06cd6ae5f2cb1e53ce98"} Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.292036 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zsswm"] Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.294014 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.297122 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.303394 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zsswm"] Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.375336 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzhk\" (UniqueName: \"kubernetes.io/projected/eb334f85-433c-44c9-a103-28af737d4dda-kube-api-access-lvzhk\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.375437 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-config\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.375468 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.375493 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.375635 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.375774 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-dns-svc\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.420019 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zsswm"] 
Dec 03 20:58:38 crc kubenswrapper[4765]: E1203 20:58:38.420715 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-lvzhk openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-578b8d767c-zsswm" podUID="eb334f85-433c-44c9-a103-28af737d4dda" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.452348 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-hrfl2"] Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.469183 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.481374 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.481473 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-dns-svc\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.481534 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzhk\" (UniqueName: \"kubernetes.io/projected/eb334f85-433c-44c9-a103-28af737d4dda-kube-api-access-lvzhk\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.481585 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-config\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.481609 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.481640 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.482767 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.505923 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-dns-svc\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.511411 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.515983 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-config\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.516088 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.526055 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzhk\" (UniqueName: \"kubernetes.io/projected/eb334f85-433c-44c9-a103-28af737d4dda-kube-api-access-lvzhk\") pod \"dnsmasq-dns-578b8d767c-zsswm\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.533360 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-hrfl2"] Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.689238 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.689304 
4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.689332 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.689377 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-config\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.689396 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxzl\" (UniqueName: \"kubernetes.io/projected/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-kube-api-access-kjxzl\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.689412 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc 
kubenswrapper[4765]: I1203 20:58:38.715669 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.723647 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.791500 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.791580 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.791605 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.791634 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-config\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.791654 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kjxzl\" (UniqueName: \"kubernetes.io/projected/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-kube-api-access-kjxzl\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.791679 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.793124 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.793353 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-config\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.793530 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.793552 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.794049 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.810348 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxzl\" (UniqueName: \"kubernetes.io/projected/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-kube-api-access-kjxzl\") pod \"dnsmasq-dns-fbc59fbb7-hrfl2\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.884470 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893182 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-dns-svc\") pod \"eb334f85-433c-44c9-a103-28af737d4dda\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893253 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-config\") pod \"eb334f85-433c-44c9-a103-28af737d4dda\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893307 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-nb\") pod \"eb334f85-433c-44c9-a103-28af737d4dda\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893332 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-openstack-edpm-ipam\") pod \"eb334f85-433c-44c9-a103-28af737d4dda\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893376 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzhk\" (UniqueName: \"kubernetes.io/projected/eb334f85-433c-44c9-a103-28af737d4dda-kube-api-access-lvzhk\") pod \"eb334f85-433c-44c9-a103-28af737d4dda\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893421 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-sb\") pod \"eb334f85-433c-44c9-a103-28af737d4dda\" (UID: \"eb334f85-433c-44c9-a103-28af737d4dda\") " Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893628 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb334f85-433c-44c9-a103-28af737d4dda" (UID: "eb334f85-433c-44c9-a103-28af737d4dda"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893718 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-config" (OuterVolumeSpecName: "config") pod "eb334f85-433c-44c9-a103-28af737d4dda" (UID: "eb334f85-433c-44c9-a103-28af737d4dda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893776 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "eb334f85-433c-44c9-a103-28af737d4dda" (UID: "eb334f85-433c-44c9-a103-28af737d4dda"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.893949 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb334f85-433c-44c9-a103-28af737d4dda" (UID: "eb334f85-433c-44c9-a103-28af737d4dda"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.894786 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.894824 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.894835 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.894844 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.895081 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb334f85-433c-44c9-a103-28af737d4dda" (UID: "eb334f85-433c-44c9-a103-28af737d4dda"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:38 crc kubenswrapper[4765]: I1203 20:58:38.896856 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb334f85-433c-44c9-a103-28af737d4dda-kube-api-access-lvzhk" (OuterVolumeSpecName: "kube-api-access-lvzhk") pod "eb334f85-433c-44c9-a103-28af737d4dda" (UID: "eb334f85-433c-44c9-a103-28af737d4dda"). InnerVolumeSpecName "kube-api-access-lvzhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:38.998571 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb334f85-433c-44c9-a103-28af737d4dda-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:38.998611 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvzhk\" (UniqueName: \"kubernetes.io/projected/eb334f85-433c-44c9-a103-28af737d4dda-kube-api-access-lvzhk\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.400311 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-hrfl2"] Dec 03 20:58:39 crc kubenswrapper[4765]: W1203 20:58:39.403881 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfd9c4f_37d5_4f3e_a18b_8472892c49e3.slice/crio-c9598956d292d6e696b26aebc6ea24dd64b3673d75d1b81445a399f624e1c431 WatchSource:0}: Error finding container c9598956d292d6e696b26aebc6ea24dd64b3673d75d1b81445a399f624e1c431: Status 404 returned error can't find the container with id c9598956d292d6e696b26aebc6ea24dd64b3673d75d1b81445a399f624e1c431 Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.731165 4765 generic.go:334] "Generic (PLEG): container finished" podID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerID="c167b0dce1dffdf1342b374ce5265bae2a92610f87f4190cc688d52b7342f55e" exitCode=0 Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.731252 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zsswm" Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.731290 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" event={"ID":"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3","Type":"ContainerDied","Data":"c167b0dce1dffdf1342b374ce5265bae2a92610f87f4190cc688d52b7342f55e"} Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.731337 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" event={"ID":"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3","Type":"ContainerStarted","Data":"c9598956d292d6e696b26aebc6ea24dd64b3673d75d1b81445a399f624e1c431"} Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.973419 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zsswm"] Dec 03 20:58:39 crc kubenswrapper[4765]: I1203 20:58:39.986237 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zsswm"] Dec 03 20:58:40 crc kubenswrapper[4765]: I1203 20:58:40.375191 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb334f85-433c-44c9-a103-28af737d4dda" path="/var/lib/kubelet/pods/eb334f85-433c-44c9-a103-28af737d4dda/volumes" Dec 03 20:58:40 crc kubenswrapper[4765]: I1203 20:58:40.744707 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" event={"ID":"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3","Type":"ContainerStarted","Data":"875526fb4f8c4844a81a92753da14b8ccc0ade9aa3fb20bf4c20d61a248e92ab"} Dec 03 20:58:40 crc kubenswrapper[4765]: I1203 20:58:40.744934 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:40 crc kubenswrapper[4765]: I1203 20:58:40.770914 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" podStartSLOduration=2.7708967920000003 
podStartE2EDuration="2.770896792s" podCreationTimestamp="2025-12-03 20:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:58:40.764037855 +0000 UTC m=+1218.694583046" watchObservedRunningTime="2025-12-03 20:58:40.770896792 +0000 UTC m=+1218.701441943" Dec 03 20:58:48 crc kubenswrapper[4765]: I1203 20:58:48.886638 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 20:58:48 crc kubenswrapper[4765]: I1203 20:58:48.962713 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-928b2"] Dec 03 20:58:48 crc kubenswrapper[4765]: I1203 20:58:48.963010 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerName="dnsmasq-dns" containerID="cri-o://64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f" gracePeriod=10 Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.452667 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.528784 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-config\") pod \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.528931 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-nb\") pod \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.528977 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-dns-svc\") pod \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.529083 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-sb\") pod \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.529119 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9fgq\" (UniqueName: \"kubernetes.io/projected/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-kube-api-access-b9fgq\") pod \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\" (UID: \"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3\") " Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.536329 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-kube-api-access-b9fgq" (OuterVolumeSpecName: "kube-api-access-b9fgq") pod "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" (UID: "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3"). InnerVolumeSpecName "kube-api-access-b9fgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.583764 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-config" (OuterVolumeSpecName: "config") pod "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" (UID: "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.591667 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" (UID: "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.593567 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" (UID: "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.596750 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" (UID: "1bff71bc-d11b-4a8b-89bf-6409e1eed4a3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.632742 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.632779 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9fgq\" (UniqueName: \"kubernetes.io/projected/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-kube-api-access-b9fgq\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.632790 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-config\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.632800 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.632808 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.849018 4765 generic.go:334] "Generic (PLEG): container finished" podID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerID="64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f" exitCode=0 Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.849080 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.849073 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" event={"ID":"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3","Type":"ContainerDied","Data":"64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f"} Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.849215 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-928b2" event={"ID":"1bff71bc-d11b-4a8b-89bf-6409e1eed4a3","Type":"ContainerDied","Data":"4f2b8d3fca0c15c322776800ee145ae79a86802a886e860392170a32ef2845bd"} Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.849233 4765 scope.go:117] "RemoveContainer" containerID="64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.887867 4765 scope.go:117] "RemoveContainer" containerID="00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.890243 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-928b2"] Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.898488 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-928b2"] Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.907702 4765 scope.go:117] "RemoveContainer" containerID="64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f" Dec 03 20:58:49 crc kubenswrapper[4765]: E1203 20:58:49.908327 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f\": container with ID starting with 64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f not found: ID does not exist" 
containerID="64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.908384 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f"} err="failed to get container status \"64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f\": rpc error: code = NotFound desc = could not find container \"64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f\": container with ID starting with 64cfd1296947fc9436366d554b5cf9c952d87b883afe5769d79e538f7aa2882f not found: ID does not exist" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.908416 4765 scope.go:117] "RemoveContainer" containerID="00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6" Dec 03 20:58:49 crc kubenswrapper[4765]: E1203 20:58:49.908843 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6\": container with ID starting with 00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6 not found: ID does not exist" containerID="00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6" Dec 03 20:58:49 crc kubenswrapper[4765]: I1203 20:58:49.909168 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6"} err="failed to get container status \"00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6\": rpc error: code = NotFound desc = could not find container \"00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6\": container with ID starting with 00f1b92cef007a3d31e4161f1f053c68b5b1de1d3661b2d776ce8cc88a4fdee6 not found: ID does not exist" Dec 03 20:58:50 crc kubenswrapper[4765]: I1203 20:58:50.370393 4765 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" path="/var/lib/kubelet/pods/1bff71bc-d11b-4a8b-89bf-6409e1eed4a3/volumes" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.612348 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln"] Dec 03 20:58:54 crc kubenswrapper[4765]: E1203 20:58:54.613426 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerName="dnsmasq-dns" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.613446 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerName="dnsmasq-dns" Dec 03 20:58:54 crc kubenswrapper[4765]: E1203 20:58:54.613465 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerName="init" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.613473 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerName="init" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.613696 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bff71bc-d11b-4a8b-89bf-6409e1eed4a3" containerName="dnsmasq-dns" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.614455 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.622065 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.622276 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.622505 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.622642 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.623473 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln"] Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.730597 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.730672 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.730720 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz7pj\" (UniqueName: \"kubernetes.io/projected/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-kube-api-access-pz7pj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.730748 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.798504 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.798594 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.831871 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz7pj\" (UniqueName: \"kubernetes.io/projected/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-kube-api-access-pz7pj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.831917 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.832030 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.832060 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.839953 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.840873 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.846988 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.857525 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz7pj\" (UniqueName: \"kubernetes.io/projected/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-kube-api-access-pz7pj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:54 crc kubenswrapper[4765]: I1203 20:58:54.951716 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:58:55 crc kubenswrapper[4765]: I1203 20:58:55.495024 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln"] Dec 03 20:58:55 crc kubenswrapper[4765]: I1203 20:58:55.502043 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 20:58:55 crc kubenswrapper[4765]: I1203 20:58:55.909133 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" event={"ID":"35f0819b-6af9-4004-a8f4-ecb6f7eeb535","Type":"ContainerStarted","Data":"21398f83422bb40513a12e5352b5a1ffbcf9f4fb42578949864d01b94a98ba5b"} Dec 03 20:59:06 crc kubenswrapper[4765]: I1203 20:59:06.019913 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" event={"ID":"35f0819b-6af9-4004-a8f4-ecb6f7eeb535","Type":"ContainerStarted","Data":"c2075be5eba25c8076017253b086c4011349b8217676b8025d2271fd8e2c16c2"} Dec 03 20:59:06 crc kubenswrapper[4765]: I1203 20:59:06.063961 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" podStartSLOduration=2.624789372 podStartE2EDuration="12.063875647s" podCreationTimestamp="2025-12-03 20:58:54 +0000 UTC" firstStartedPulling="2025-12-03 20:58:55.501650741 +0000 UTC m=+1233.432195912" lastFinishedPulling="2025-12-03 20:59:04.940736986 +0000 UTC m=+1242.871282187" observedRunningTime="2025-12-03 20:59:06.046188968 +0000 UTC m=+1243.976734159" watchObservedRunningTime="2025-12-03 20:59:06.063875647 +0000 UTC m=+1243.994420838" Dec 03 20:59:09 crc kubenswrapper[4765]: I1203 20:59:09.050755 4765 generic.go:334] "Generic (PLEG): container finished" podID="0be5953f-7d37-4d82-8ea7-3cff10d763c1" 
containerID="59411bca515e309c1997d51cfdaee2be7420fa5660d74acd6c38b0b488151a39" exitCode=0 Dec 03 20:59:09 crc kubenswrapper[4765]: I1203 20:59:09.050820 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be5953f-7d37-4d82-8ea7-3cff10d763c1","Type":"ContainerDied","Data":"59411bca515e309c1997d51cfdaee2be7420fa5660d74acd6c38b0b488151a39"} Dec 03 20:59:10 crc kubenswrapper[4765]: I1203 20:59:10.062448 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0be5953f-7d37-4d82-8ea7-3cff10d763c1","Type":"ContainerStarted","Data":"fed9a4908d0556235ff8f33e34359a80e2c90c8d3e6f3bbb490ffef3b37260d7"} Dec 03 20:59:10 crc kubenswrapper[4765]: I1203 20:59:10.063077 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 20:59:10 crc kubenswrapper[4765]: I1203 20:59:10.064644 4765 generic.go:334] "Generic (PLEG): container finished" podID="c0d9b22a-4baf-4947-bbba-e158c4e554e5" containerID="d93a0f8c69bc734deb27bd9497d9e69d3b6d80d6160c06cd6ae5f2cb1e53ce98" exitCode=0 Dec 03 20:59:10 crc kubenswrapper[4765]: I1203 20:59:10.064672 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c0d9b22a-4baf-4947-bbba-e158c4e554e5","Type":"ContainerDied","Data":"d93a0f8c69bc734deb27bd9497d9e69d3b6d80d6160c06cd6ae5f2cb1e53ce98"} Dec 03 20:59:10 crc kubenswrapper[4765]: I1203 20:59:10.105400 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.105380746 podStartE2EDuration="38.105380746s" podCreationTimestamp="2025-12-03 20:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:59:10.098769488 +0000 UTC m=+1248.029314639" watchObservedRunningTime="2025-12-03 20:59:10.105380746 +0000 UTC m=+1248.035925897" Dec 03 
20:59:11 crc kubenswrapper[4765]: I1203 20:59:11.075375 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c0d9b22a-4baf-4947-bbba-e158c4e554e5","Type":"ContainerStarted","Data":"5f94a1225f145bd1a06026247a883e454a811f3031c4b812ad9e21a501e26aa9"} Dec 03 20:59:11 crc kubenswrapper[4765]: I1203 20:59:11.076707 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:59:11 crc kubenswrapper[4765]: I1203 20:59:11.112390 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.112360389 podStartE2EDuration="37.112360389s" podCreationTimestamp="2025-12-03 20:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 20:59:11.098219706 +0000 UTC m=+1249.028764857" watchObservedRunningTime="2025-12-03 20:59:11.112360389 +0000 UTC m=+1249.042905570" Dec 03 20:59:17 crc kubenswrapper[4765]: I1203 20:59:17.139368 4765 generic.go:334] "Generic (PLEG): container finished" podID="35f0819b-6af9-4004-a8f4-ecb6f7eeb535" containerID="c2075be5eba25c8076017253b086c4011349b8217676b8025d2271fd8e2c16c2" exitCode=0 Dec 03 20:59:17 crc kubenswrapper[4765]: I1203 20:59:17.139819 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" event={"ID":"35f0819b-6af9-4004-a8f4-ecb6f7eeb535","Type":"ContainerDied","Data":"c2075be5eba25c8076017253b086c4011349b8217676b8025d2271fd8e2c16c2"} Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.518087 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.621405 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-repo-setup-combined-ca-bundle\") pod \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.622266 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-inventory\") pod \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.623271 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz7pj\" (UniqueName: \"kubernetes.io/projected/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-kube-api-access-pz7pj\") pod \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.623391 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-ssh-key\") pod \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\" (UID: \"35f0819b-6af9-4004-a8f4-ecb6f7eeb535\") " Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.628880 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-kube-api-access-pz7pj" (OuterVolumeSpecName: "kube-api-access-pz7pj") pod "35f0819b-6af9-4004-a8f4-ecb6f7eeb535" (UID: "35f0819b-6af9-4004-a8f4-ecb6f7eeb535"). InnerVolumeSpecName "kube-api-access-pz7pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.628925 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "35f0819b-6af9-4004-a8f4-ecb6f7eeb535" (UID: "35f0819b-6af9-4004-a8f4-ecb6f7eeb535"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.652319 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-inventory" (OuterVolumeSpecName: "inventory") pod "35f0819b-6af9-4004-a8f4-ecb6f7eeb535" (UID: "35f0819b-6af9-4004-a8f4-ecb6f7eeb535"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.676263 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35f0819b-6af9-4004-a8f4-ecb6f7eeb535" (UID: "35f0819b-6af9-4004-a8f4-ecb6f7eeb535"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.726479 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.726514 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz7pj\" (UniqueName: \"kubernetes.io/projected/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-kube-api-access-pz7pj\") on node \"crc\" DevicePath \"\"" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.726525 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 20:59:18 crc kubenswrapper[4765]: I1203 20:59:18.726536 4765 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f0819b-6af9-4004-a8f4-ecb6f7eeb535-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.163125 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" event={"ID":"35f0819b-6af9-4004-a8f4-ecb6f7eeb535","Type":"ContainerDied","Data":"21398f83422bb40513a12e5352b5a1ffbcf9f4fb42578949864d01b94a98ba5b"} Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.163180 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.163197 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21398f83422bb40513a12e5352b5a1ffbcf9f4fb42578949864d01b94a98ba5b" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.274267 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p"] Dec 03 20:59:19 crc kubenswrapper[4765]: E1203 20:59:19.277945 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f0819b-6af9-4004-a8f4-ecb6f7eeb535" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.277972 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f0819b-6af9-4004-a8f4-ecb6f7eeb535" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.278417 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f0819b-6af9-4004-a8f4-ecb6f7eeb535" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.279702 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.284475 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.284611 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.284720 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.286065 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.288815 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p"] Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.339421 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kp2l\" (UniqueName: \"kubernetes.io/projected/89a7a88b-9262-4c20-922e-89aa3d551eff-kube-api-access-8kp2l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.339466 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.339548 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.339594 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.440438 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.440887 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kp2l\" (UniqueName: \"kubernetes.io/projected/89a7a88b-9262-4c20-922e-89aa3d551eff-kube-api-access-8kp2l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.440914 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.441007 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.444169 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.448804 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.454577 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.456977 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kp2l\" (UniqueName: \"kubernetes.io/projected/89a7a88b-9262-4c20-922e-89aa3d551eff-kube-api-access-8kp2l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:19 crc kubenswrapper[4765]: I1203 20:59:19.609951 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 20:59:20 crc kubenswrapper[4765]: I1203 20:59:20.240504 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p"] Dec 03 20:59:21 crc kubenswrapper[4765]: I1203 20:59:21.191747 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" event={"ID":"89a7a88b-9262-4c20-922e-89aa3d551eff","Type":"ContainerStarted","Data":"6700fa303c83eb1605307888580ca25c6fa9918f8c85ad4c34180d1ac296aaa8"} Dec 03 20:59:21 crc kubenswrapper[4765]: I1203 20:59:21.192166 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" event={"ID":"89a7a88b-9262-4c20-922e-89aa3d551eff","Type":"ContainerStarted","Data":"318539ba4a281968923cec4b67e8e2e0aeb3fbc718a04f11b7788e0f7685fa66"} Dec 03 20:59:21 crc kubenswrapper[4765]: I1203 20:59:21.220793 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" podStartSLOduration=1.790496615 podStartE2EDuration="2.220768901s" podCreationTimestamp="2025-12-03 20:59:19 +0000 UTC" firstStartedPulling="2025-12-03 20:59:20.252870398 +0000 UTC m=+1258.183415549" lastFinishedPulling="2025-12-03 20:59:20.683142664 +0000 UTC m=+1258.613687835" observedRunningTime="2025-12-03 20:59:21.218835509 +0000 UTC 
m=+1259.149380710" watchObservedRunningTime="2025-12-03 20:59:21.220768901 +0000 UTC m=+1259.151314052" Dec 03 20:59:23 crc kubenswrapper[4765]: I1203 20:59:23.351785 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 20:59:24 crc kubenswrapper[4765]: I1203 20:59:24.798904 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 20:59:24 crc kubenswrapper[4765]: I1203 20:59:24.799667 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 20:59:24 crc kubenswrapper[4765]: I1203 20:59:24.799732 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 20:59:24 crc kubenswrapper[4765]: I1203 20:59:24.800580 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6aa749a3d52e7027f4c1b57e3c92a047ee77ee6c07dbbdf89a660a9fce0275e1"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 20:59:24 crc kubenswrapper[4765]: I1203 20:59:24.800659 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" 
containerID="cri-o://6aa749a3d52e7027f4c1b57e3c92a047ee77ee6c07dbbdf89a660a9fce0275e1" gracePeriod=600 Dec 03 20:59:25 crc kubenswrapper[4765]: I1203 20:59:25.070729 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 20:59:25 crc kubenswrapper[4765]: I1203 20:59:25.274912 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="6aa749a3d52e7027f4c1b57e3c92a047ee77ee6c07dbbdf89a660a9fce0275e1" exitCode=0 Dec 03 20:59:25 crc kubenswrapper[4765]: I1203 20:59:25.275280 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"6aa749a3d52e7027f4c1b57e3c92a047ee77ee6c07dbbdf89a660a9fce0275e1"} Dec 03 20:59:25 crc kubenswrapper[4765]: I1203 20:59:25.275342 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"d72382d303db1a66d05ac874469c7186f77b9d02304b84ce9b1323dcea340ec0"} Dec 03 20:59:25 crc kubenswrapper[4765]: I1203 20:59:25.275366 4765 scope.go:117] "RemoveContainer" containerID="d266b170dcf90c0708b0665cd61a7d72698207d468421a1880d76491e1e67a93" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.158015 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l"] Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.161121 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.164506 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.164699 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.169466 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l"] Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.313369 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87495622-7d1f-48a8-9007-40e58f936a08-config-volume\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.313440 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2fd\" (UniqueName: \"kubernetes.io/projected/87495622-7d1f-48a8-9007-40e58f936a08-kube-api-access-jk2fd\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.313477 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87495622-7d1f-48a8-9007-40e58f936a08-secret-volume\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.414906 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87495622-7d1f-48a8-9007-40e58f936a08-config-volume\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.414987 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk2fd\" (UniqueName: \"kubernetes.io/projected/87495622-7d1f-48a8-9007-40e58f936a08-kube-api-access-jk2fd\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.415047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87495622-7d1f-48a8-9007-40e58f936a08-secret-volume\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.415873 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87495622-7d1f-48a8-9007-40e58f936a08-config-volume\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.424342 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/87495622-7d1f-48a8-9007-40e58f936a08-secret-volume\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.451399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk2fd\" (UniqueName: \"kubernetes.io/projected/87495622-7d1f-48a8-9007-40e58f936a08-kube-api-access-jk2fd\") pod \"collect-profiles-29413260-9c88l\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:00 crc kubenswrapper[4765]: I1203 21:00:00.526984 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:01 crc kubenswrapper[4765]: I1203 21:00:01.003482 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l"] Dec 03 21:00:01 crc kubenswrapper[4765]: I1203 21:00:01.660553 4765 generic.go:334] "Generic (PLEG): container finished" podID="87495622-7d1f-48a8-9007-40e58f936a08" containerID="bdb69c7d0fbd9e9562003d5075c4d09c82f675a5b19548248bbc9542eb763120" exitCode=0 Dec 03 21:00:01 crc kubenswrapper[4765]: I1203 21:00:01.660623 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" event={"ID":"87495622-7d1f-48a8-9007-40e58f936a08","Type":"ContainerDied","Data":"bdb69c7d0fbd9e9562003d5075c4d09c82f675a5b19548248bbc9542eb763120"} Dec 03 21:00:01 crc kubenswrapper[4765]: I1203 21:00:01.660941 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" 
event={"ID":"87495622-7d1f-48a8-9007-40e58f936a08","Type":"ContainerStarted","Data":"686e05c5f6ae279e8f76d9d602c0d5c28e30e0a9de191f9a22231d187e7bdd63"} Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.009882 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.176356 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk2fd\" (UniqueName: \"kubernetes.io/projected/87495622-7d1f-48a8-9007-40e58f936a08-kube-api-access-jk2fd\") pod \"87495622-7d1f-48a8-9007-40e58f936a08\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.176429 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87495622-7d1f-48a8-9007-40e58f936a08-config-volume\") pod \"87495622-7d1f-48a8-9007-40e58f936a08\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.176519 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87495622-7d1f-48a8-9007-40e58f936a08-secret-volume\") pod \"87495622-7d1f-48a8-9007-40e58f936a08\" (UID: \"87495622-7d1f-48a8-9007-40e58f936a08\") " Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.177820 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87495622-7d1f-48a8-9007-40e58f936a08-config-volume" (OuterVolumeSpecName: "config-volume") pod "87495622-7d1f-48a8-9007-40e58f936a08" (UID: "87495622-7d1f-48a8-9007-40e58f936a08"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.185313 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87495622-7d1f-48a8-9007-40e58f936a08-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87495622-7d1f-48a8-9007-40e58f936a08" (UID: "87495622-7d1f-48a8-9007-40e58f936a08"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.185363 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87495622-7d1f-48a8-9007-40e58f936a08-kube-api-access-jk2fd" (OuterVolumeSpecName: "kube-api-access-jk2fd") pod "87495622-7d1f-48a8-9007-40e58f936a08" (UID: "87495622-7d1f-48a8-9007-40e58f936a08"). InnerVolumeSpecName "kube-api-access-jk2fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.278429 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk2fd\" (UniqueName: \"kubernetes.io/projected/87495622-7d1f-48a8-9007-40e58f936a08-kube-api-access-jk2fd\") on node \"crc\" DevicePath \"\"" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.278464 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87495622-7d1f-48a8-9007-40e58f936a08-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.278473 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87495622-7d1f-48a8-9007-40e58f936a08-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.678066 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" 
event={"ID":"87495622-7d1f-48a8-9007-40e58f936a08","Type":"ContainerDied","Data":"686e05c5f6ae279e8f76d9d602c0d5c28e30e0a9de191f9a22231d187e7bdd63"} Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.678103 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="686e05c5f6ae279e8f76d9d602c0d5c28e30e0a9de191f9a22231d187e7bdd63" Dec 03 21:00:03 crc kubenswrapper[4765]: I1203 21:00:03.678530 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l" Dec 03 21:00:27 crc kubenswrapper[4765]: I1203 21:00:27.006975 4765 scope.go:117] "RemoveContainer" containerID="544dc13376d19749c3b01f3871ab7c9287a388fad05317770a10500c950337a5" Dec 03 21:00:27 crc kubenswrapper[4765]: I1203 21:00:27.052263 4765 scope.go:117] "RemoveContainer" containerID="4adda8522a70a47d37db5d1f4816fda7d06778fa3cf88a2519c43041d335f1c2" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.176494 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29413261-sllgz"] Dec 03 21:01:00 crc kubenswrapper[4765]: E1203 21:01:00.177674 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87495622-7d1f-48a8-9007-40e58f936a08" containerName="collect-profiles" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.177698 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="87495622-7d1f-48a8-9007-40e58f936a08" containerName="collect-profiles" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.178045 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="87495622-7d1f-48a8-9007-40e58f936a08" containerName="collect-profiles" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.179070 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.187959 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29413261-sllgz"] Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.304437 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-combined-ca-bundle\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.304585 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-config-data\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.304615 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjbs\" (UniqueName: \"kubernetes.io/projected/d52a513d-e85f-4c95-9188-8748e9f08c2b-kube-api-access-5bjbs\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.304637 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-fernet-keys\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.406868 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-combined-ca-bundle\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.406941 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-config-data\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.406962 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjbs\" (UniqueName: \"kubernetes.io/projected/d52a513d-e85f-4c95-9188-8748e9f08c2b-kube-api-access-5bjbs\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.406981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-fernet-keys\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.413157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-config-data\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.413607 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-combined-ca-bundle\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.414399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-fernet-keys\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.426437 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjbs\" (UniqueName: \"kubernetes.io/projected/d52a513d-e85f-4c95-9188-8748e9f08c2b-kube-api-access-5bjbs\") pod \"keystone-cron-29413261-sllgz\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.513554 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:00 crc kubenswrapper[4765]: I1203 21:01:00.989566 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29413261-sllgz"] Dec 03 21:01:01 crc kubenswrapper[4765]: I1203 21:01:01.283508 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413261-sllgz" event={"ID":"d52a513d-e85f-4c95-9188-8748e9f08c2b","Type":"ContainerStarted","Data":"2b348d2bf7b3065cfff7614e4524695e3fb2852be2de87cd4207452994e51076"} Dec 03 21:01:02 crc kubenswrapper[4765]: I1203 21:01:02.296985 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413261-sllgz" event={"ID":"d52a513d-e85f-4c95-9188-8748e9f08c2b","Type":"ContainerStarted","Data":"5bd730b44f83bc7bf116f80623fe36ef2a37929c7e30bef9bc1b065cc114702c"} Dec 03 21:01:02 crc kubenswrapper[4765]: I1203 21:01:02.326531 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29413261-sllgz" podStartSLOduration=2.326510992 podStartE2EDuration="2.326510992s" podCreationTimestamp="2025-12-03 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:01:02.324708924 +0000 UTC m=+1360.255254085" watchObservedRunningTime="2025-12-03 21:01:02.326510992 +0000 UTC m=+1360.257056153" Dec 03 21:01:04 crc kubenswrapper[4765]: I1203 21:01:04.339350 4765 generic.go:334] "Generic (PLEG): container finished" podID="d52a513d-e85f-4c95-9188-8748e9f08c2b" containerID="5bd730b44f83bc7bf116f80623fe36ef2a37929c7e30bef9bc1b065cc114702c" exitCode=0 Dec 03 21:01:04 crc kubenswrapper[4765]: I1203 21:01:04.339740 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413261-sllgz" 
event={"ID":"d52a513d-e85f-4c95-9188-8748e9f08c2b","Type":"ContainerDied","Data":"5bd730b44f83bc7bf116f80623fe36ef2a37929c7e30bef9bc1b065cc114702c"} Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.709178 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.825172 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-config-data\") pod \"d52a513d-e85f-4c95-9188-8748e9f08c2b\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.825365 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-combined-ca-bundle\") pod \"d52a513d-e85f-4c95-9188-8748e9f08c2b\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.825415 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-fernet-keys\") pod \"d52a513d-e85f-4c95-9188-8748e9f08c2b\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.825465 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bjbs\" (UniqueName: \"kubernetes.io/projected/d52a513d-e85f-4c95-9188-8748e9f08c2b-kube-api-access-5bjbs\") pod \"d52a513d-e85f-4c95-9188-8748e9f08c2b\" (UID: \"d52a513d-e85f-4c95-9188-8748e9f08c2b\") " Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.832417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "d52a513d-e85f-4c95-9188-8748e9f08c2b" (UID: "d52a513d-e85f-4c95-9188-8748e9f08c2b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.833439 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52a513d-e85f-4c95-9188-8748e9f08c2b-kube-api-access-5bjbs" (OuterVolumeSpecName: "kube-api-access-5bjbs") pod "d52a513d-e85f-4c95-9188-8748e9f08c2b" (UID: "d52a513d-e85f-4c95-9188-8748e9f08c2b"). InnerVolumeSpecName "kube-api-access-5bjbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.884417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d52a513d-e85f-4c95-9188-8748e9f08c2b" (UID: "d52a513d-e85f-4c95-9188-8748e9f08c2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.913172 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-config-data" (OuterVolumeSpecName: "config-data") pod "d52a513d-e85f-4c95-9188-8748e9f08c2b" (UID: "d52a513d-e85f-4c95-9188-8748e9f08c2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.928686 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.928763 4765 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.928794 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bjbs\" (UniqueName: \"kubernetes.io/projected/d52a513d-e85f-4c95-9188-8748e9f08c2b-kube-api-access-5bjbs\") on node \"crc\" DevicePath \"\"" Dec 03 21:01:05 crc kubenswrapper[4765]: I1203 21:01:05.928818 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52a513d-e85f-4c95-9188-8748e9f08c2b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:01:06 crc kubenswrapper[4765]: I1203 21:01:06.364214 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413261-sllgz" Dec 03 21:01:06 crc kubenswrapper[4765]: I1203 21:01:06.407903 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413261-sllgz" event={"ID":"d52a513d-e85f-4c95-9188-8748e9f08c2b","Type":"ContainerDied","Data":"2b348d2bf7b3065cfff7614e4524695e3fb2852be2de87cd4207452994e51076"} Dec 03 21:01:06 crc kubenswrapper[4765]: I1203 21:01:06.407960 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b348d2bf7b3065cfff7614e4524695e3fb2852be2de87cd4207452994e51076" Dec 03 21:01:27 crc kubenswrapper[4765]: I1203 21:01:27.150890 4765 scope.go:117] "RemoveContainer" containerID="fff205ef81863b7053c6630f280650dfa7adf09912225c268437151f120b3aaf" Dec 03 21:01:27 crc kubenswrapper[4765]: I1203 21:01:27.213148 4765 scope.go:117] "RemoveContainer" containerID="63cbf044741bdea1536246541c6615f5d238a3bd84f19ca23ae9fb8be94ec921" Dec 03 21:01:27 crc kubenswrapper[4765]: I1203 21:01:27.277040 4765 scope.go:117] "RemoveContainer" containerID="07c250f6f51efa9e77a7e95e42c3f1560fcff00b5552568d710f0203ab56f2b8" Dec 03 21:01:54 crc kubenswrapper[4765]: I1203 21:01:54.798925 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:01:54 crc kubenswrapper[4765]: I1203 21:01:54.799756 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:02:24 crc kubenswrapper[4765]: I1203 21:02:24.798993 4765 patch_prober.go:28] interesting 
pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:02:24 crc kubenswrapper[4765]: I1203 21:02:24.799814 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:02:27 crc kubenswrapper[4765]: I1203 21:02:27.398955 4765 scope.go:117] "RemoveContainer" containerID="538030bcbc76a23791f1b760308be19bda72fd17241427b2eba81c0586719b41" Dec 03 21:02:27 crc kubenswrapper[4765]: I1203 21:02:27.432158 4765 scope.go:117] "RemoveContainer" containerID="2cd89c5ed7d221a041e2723fd51a7215fa1ce2050e692016d0630e8132058798" Dec 03 21:02:35 crc kubenswrapper[4765]: I1203 21:02:35.398736 4765 generic.go:334] "Generic (PLEG): container finished" podID="89a7a88b-9262-4c20-922e-89aa3d551eff" containerID="6700fa303c83eb1605307888580ca25c6fa9918f8c85ad4c34180d1ac296aaa8" exitCode=0 Dec 03 21:02:35 crc kubenswrapper[4765]: I1203 21:02:35.399221 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" event={"ID":"89a7a88b-9262-4c20-922e-89aa3d551eff","Type":"ContainerDied","Data":"6700fa303c83eb1605307888580ca25c6fa9918f8c85ad4c34180d1ac296aaa8"} Dec 03 21:02:36 crc kubenswrapper[4765]: I1203 21:02:36.823584 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.000863 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-bootstrap-combined-ca-bundle\") pod \"89a7a88b-9262-4c20-922e-89aa3d551eff\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.001546 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kp2l\" (UniqueName: \"kubernetes.io/projected/89a7a88b-9262-4c20-922e-89aa3d551eff-kube-api-access-8kp2l\") pod \"89a7a88b-9262-4c20-922e-89aa3d551eff\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.001729 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-inventory\") pod \"89a7a88b-9262-4c20-922e-89aa3d551eff\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.001767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-ssh-key\") pod \"89a7a88b-9262-4c20-922e-89aa3d551eff\" (UID: \"89a7a88b-9262-4c20-922e-89aa3d551eff\") " Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.007096 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "89a7a88b-9262-4c20-922e-89aa3d551eff" (UID: "89a7a88b-9262-4c20-922e-89aa3d551eff"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.007373 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a7a88b-9262-4c20-922e-89aa3d551eff-kube-api-access-8kp2l" (OuterVolumeSpecName: "kube-api-access-8kp2l") pod "89a7a88b-9262-4c20-922e-89aa3d551eff" (UID: "89a7a88b-9262-4c20-922e-89aa3d551eff"). InnerVolumeSpecName "kube-api-access-8kp2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.030157 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89a7a88b-9262-4c20-922e-89aa3d551eff" (UID: "89a7a88b-9262-4c20-922e-89aa3d551eff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.031172 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-inventory" (OuterVolumeSpecName: "inventory") pod "89a7a88b-9262-4c20-922e-89aa3d551eff" (UID: "89a7a88b-9262-4c20-922e-89aa3d551eff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.103538 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.103569 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.103579 4765 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7a88b-9262-4c20-922e-89aa3d551eff-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.103591 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kp2l\" (UniqueName: \"kubernetes.io/projected/89a7a88b-9262-4c20-922e-89aa3d551eff-kube-api-access-8kp2l\") on node \"crc\" DevicePath \"\"" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.444135 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" event={"ID":"89a7a88b-9262-4c20-922e-89aa3d551eff","Type":"ContainerDied","Data":"318539ba4a281968923cec4b67e8e2e0aeb3fbc718a04f11b7788e0f7685fa66"} Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.444510 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="318539ba4a281968923cec4b67e8e2e0aeb3fbc718a04f11b7788e0f7685fa66" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.444806 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.532796 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl"] Dec 03 21:02:37 crc kubenswrapper[4765]: E1203 21:02:37.533463 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52a513d-e85f-4c95-9188-8748e9f08c2b" containerName="keystone-cron" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.533484 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52a513d-e85f-4c95-9188-8748e9f08c2b" containerName="keystone-cron" Dec 03 21:02:37 crc kubenswrapper[4765]: E1203 21:02:37.533496 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a7a88b-9262-4c20-922e-89aa3d551eff" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.533507 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a7a88b-9262-4c20-922e-89aa3d551eff" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.534734 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a7a88b-9262-4c20-922e-89aa3d551eff" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.534782 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52a513d-e85f-4c95-9188-8748e9f08c2b" containerName="keystone-cron" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.535704 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.541879 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.542146 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.542616 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.542770 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.546219 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl"] Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.620463 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24sqs\" (UniqueName: \"kubernetes.io/projected/4c935ca3-bb76-490b-b05d-47f3d91136cb-kube-api-access-24sqs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.620863 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: 
I1203 21:02:37.621060 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.723377 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24sqs\" (UniqueName: \"kubernetes.io/projected/4c935ca3-bb76-490b-b05d-47f3d91136cb-kube-api-access-24sqs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.723448 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.723529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.727944 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.739963 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.751156 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24sqs\" (UniqueName: \"kubernetes.io/projected/4c935ca3-bb76-490b-b05d-47f3d91136cb-kube-api-access-24sqs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:37 crc kubenswrapper[4765]: I1203 21:02:37.853055 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:02:38 crc kubenswrapper[4765]: I1203 21:02:38.408676 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl"] Dec 03 21:02:38 crc kubenswrapper[4765]: I1203 21:02:38.454347 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" event={"ID":"4c935ca3-bb76-490b-b05d-47f3d91136cb","Type":"ContainerStarted","Data":"419b6d8a7c7d9a9898331c63ad9af2ba05827ccd1853c51450b7469d958a84a1"} Dec 03 21:02:39 crc kubenswrapper[4765]: I1203 21:02:39.469141 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" event={"ID":"4c935ca3-bb76-490b-b05d-47f3d91136cb","Type":"ContainerStarted","Data":"7e7c228df01cd5d7a1471aa2ddcf59079b45071c862cb9a22415431172750bf3"} Dec 03 21:02:39 crc kubenswrapper[4765]: I1203 21:02:39.496425 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" podStartSLOduration=2.084157208 podStartE2EDuration="2.496408943s" podCreationTimestamp="2025-12-03 21:02:37 +0000 UTC" firstStartedPulling="2025-12-03 21:02:38.417092027 +0000 UTC m=+1456.347637218" lastFinishedPulling="2025-12-03 21:02:38.829343782 +0000 UTC m=+1456.759888953" observedRunningTime="2025-12-03 21:02:39.491049336 +0000 UTC m=+1457.421594487" watchObservedRunningTime="2025-12-03 21:02:39.496408943 +0000 UTC m=+1457.426954094" Dec 03 21:02:54 crc kubenswrapper[4765]: I1203 21:02:54.798584 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Dec 03 21:02:54 crc kubenswrapper[4765]: I1203 21:02:54.799293 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:02:54 crc kubenswrapper[4765]: I1203 21:02:54.799444 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:02:54 crc kubenswrapper[4765]: I1203 21:02:54.801276 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d72382d303db1a66d05ac874469c7186f77b9d02304b84ce9b1323dcea340ec0"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:02:54 crc kubenswrapper[4765]: I1203 21:02:54.801434 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://d72382d303db1a66d05ac874469c7186f77b9d02304b84ce9b1323dcea340ec0" gracePeriod=600 Dec 03 21:02:55 crc kubenswrapper[4765]: I1203 21:02:55.681538 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="d72382d303db1a66d05ac874469c7186f77b9d02304b84ce9b1323dcea340ec0" exitCode=0 Dec 03 21:02:55 crc kubenswrapper[4765]: I1203 21:02:55.681640 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" 
event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"d72382d303db1a66d05ac874469c7186f77b9d02304b84ce9b1323dcea340ec0"} Dec 03 21:02:55 crc kubenswrapper[4765]: I1203 21:02:55.682046 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae"} Dec 03 21:02:55 crc kubenswrapper[4765]: I1203 21:02:55.682069 4765 scope.go:117] "RemoveContainer" containerID="6aa749a3d52e7027f4c1b57e3c92a047ee77ee6c07dbbdf89a660a9fce0275e1" Dec 03 21:03:27 crc kubenswrapper[4765]: I1203 21:03:27.515570 4765 scope.go:117] "RemoveContainer" containerID="4a86832328b2d1eb9bda7f8d184218bd45daca5d49c383c3c4d2106c3014b09c" Dec 03 21:03:51 crc kubenswrapper[4765]: I1203 21:03:51.230791 4765 generic.go:334] "Generic (PLEG): container finished" podID="4c935ca3-bb76-490b-b05d-47f3d91136cb" containerID="7e7c228df01cd5d7a1471aa2ddcf59079b45071c862cb9a22415431172750bf3" exitCode=0 Dec 03 21:03:51 crc kubenswrapper[4765]: I1203 21:03:51.230843 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" event={"ID":"4c935ca3-bb76-490b-b05d-47f3d91136cb","Type":"ContainerDied","Data":"7e7c228df01cd5d7a1471aa2ddcf59079b45071c862cb9a22415431172750bf3"} Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.639586 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.792316 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-inventory\") pod \"4c935ca3-bb76-490b-b05d-47f3d91136cb\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.792403 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24sqs\" (UniqueName: \"kubernetes.io/projected/4c935ca3-bb76-490b-b05d-47f3d91136cb-kube-api-access-24sqs\") pod \"4c935ca3-bb76-490b-b05d-47f3d91136cb\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.792484 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-ssh-key\") pod \"4c935ca3-bb76-490b-b05d-47f3d91136cb\" (UID: \"4c935ca3-bb76-490b-b05d-47f3d91136cb\") " Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.797825 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c935ca3-bb76-490b-b05d-47f3d91136cb-kube-api-access-24sqs" (OuterVolumeSpecName: "kube-api-access-24sqs") pod "4c935ca3-bb76-490b-b05d-47f3d91136cb" (UID: "4c935ca3-bb76-490b-b05d-47f3d91136cb"). InnerVolumeSpecName "kube-api-access-24sqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.820533 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-inventory" (OuterVolumeSpecName: "inventory") pod "4c935ca3-bb76-490b-b05d-47f3d91136cb" (UID: "4c935ca3-bb76-490b-b05d-47f3d91136cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.844853 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4c935ca3-bb76-490b-b05d-47f3d91136cb" (UID: "4c935ca3-bb76-490b-b05d-47f3d91136cb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.902704 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.902737 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24sqs\" (UniqueName: \"kubernetes.io/projected/4c935ca3-bb76-490b-b05d-47f3d91136cb-kube-api-access-24sqs\") on node \"crc\" DevicePath \"\"" Dec 03 21:03:52 crc kubenswrapper[4765]: I1203 21:03:52.902751 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c935ca3-bb76-490b-b05d-47f3d91136cb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.253229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" event={"ID":"4c935ca3-bb76-490b-b05d-47f3d91136cb","Type":"ContainerDied","Data":"419b6d8a7c7d9a9898331c63ad9af2ba05827ccd1853c51450b7469d958a84a1"} Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.253551 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419b6d8a7c7d9a9898331c63ad9af2ba05827ccd1853c51450b7469d958a84a1" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.253679 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.405843 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd"] Dec 03 21:03:53 crc kubenswrapper[4765]: E1203 21:03:53.406275 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c935ca3-bb76-490b-b05d-47f3d91136cb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.406311 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c935ca3-bb76-490b-b05d-47f3d91136cb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.406531 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c935ca3-bb76-490b-b05d-47f3d91136cb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.407274 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.409308 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.409316 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.409849 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.410152 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.410601 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.410770 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4nb\" (UniqueName: \"kubernetes.io/projected/75f461fc-4404-425c-a66f-d06f2f31c027-kube-api-access-sv4nb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.410827 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.416096 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd"] Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.512213 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.512353 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.512401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv4nb\" (UniqueName: \"kubernetes.io/projected/75f461fc-4404-425c-a66f-d06f2f31c027-kube-api-access-sv4nb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.516678 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.516682 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.528684 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv4nb\" (UniqueName: \"kubernetes.io/projected/75f461fc-4404-425c-a66f-d06f2f31c027-kube-api-access-sv4nb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:53 crc kubenswrapper[4765]: I1203 21:03:53.737020 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:03:54 crc kubenswrapper[4765]: I1203 21:03:54.253977 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd"] Dec 03 21:03:54 crc kubenswrapper[4765]: I1203 21:03:54.263717 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" event={"ID":"75f461fc-4404-425c-a66f-d06f2f31c027","Type":"ContainerStarted","Data":"a749d9ba4accf81d93347fc63e9e45af3da637559cb537c28d55dcf5a402f6bf"} Dec 03 21:03:55 crc kubenswrapper[4765]: I1203 21:03:55.273281 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" event={"ID":"75f461fc-4404-425c-a66f-d06f2f31c027","Type":"ContainerStarted","Data":"33d200e78c618970df8cf59f7454eed8051c111922e41d2eeb0b2e2b8f43c356"} Dec 03 21:03:55 crc kubenswrapper[4765]: I1203 21:03:55.295997 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" podStartSLOduration=1.849602014 podStartE2EDuration="2.295982274s" podCreationTimestamp="2025-12-03 21:03:53 +0000 UTC" firstStartedPulling="2025-12-03 21:03:54.25684117 +0000 UTC m=+1532.187386321" lastFinishedPulling="2025-12-03 21:03:54.70322143 +0000 UTC m=+1532.633766581" observedRunningTime="2025-12-03 21:03:55.289219508 +0000 UTC m=+1533.219764659" watchObservedRunningTime="2025-12-03 21:03:55.295982274 +0000 UTC m=+1533.226527415" Dec 03 21:04:02 crc kubenswrapper[4765]: I1203 21:04:02.345489 4765 generic.go:334] "Generic (PLEG): container finished" podID="75f461fc-4404-425c-a66f-d06f2f31c027" containerID="33d200e78c618970df8cf59f7454eed8051c111922e41d2eeb0b2e2b8f43c356" exitCode=0 Dec 03 21:04:02 crc kubenswrapper[4765]: I1203 21:04:02.345568 4765 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" event={"ID":"75f461fc-4404-425c-a66f-d06f2f31c027","Type":"ContainerDied","Data":"33d200e78c618970df8cf59f7454eed8051c111922e41d2eeb0b2e2b8f43c356"} Dec 03 21:04:03 crc kubenswrapper[4765]: I1203 21:04:03.880844 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.052993 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-inventory\") pod \"75f461fc-4404-425c-a66f-d06f2f31c027\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.053472 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv4nb\" (UniqueName: \"kubernetes.io/projected/75f461fc-4404-425c-a66f-d06f2f31c027-kube-api-access-sv4nb\") pod \"75f461fc-4404-425c-a66f-d06f2f31c027\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.053538 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-ssh-key\") pod \"75f461fc-4404-425c-a66f-d06f2f31c027\" (UID: \"75f461fc-4404-425c-a66f-d06f2f31c027\") " Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.060009 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f461fc-4404-425c-a66f-d06f2f31c027-kube-api-access-sv4nb" (OuterVolumeSpecName: "kube-api-access-sv4nb") pod "75f461fc-4404-425c-a66f-d06f2f31c027" (UID: "75f461fc-4404-425c-a66f-d06f2f31c027"). InnerVolumeSpecName "kube-api-access-sv4nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.084990 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-inventory" (OuterVolumeSpecName: "inventory") pod "75f461fc-4404-425c-a66f-d06f2f31c027" (UID: "75f461fc-4404-425c-a66f-d06f2f31c027"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.091570 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "75f461fc-4404-425c-a66f-d06f2f31c027" (UID: "75f461fc-4404-425c-a66f-d06f2f31c027"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.156183 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv4nb\" (UniqueName: \"kubernetes.io/projected/75f461fc-4404-425c-a66f-d06f2f31c027-kube-api-access-sv4nb\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.156243 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.156260 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75f461fc-4404-425c-a66f-d06f2f31c027-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.378957 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.389962 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd" event={"ID":"75f461fc-4404-425c-a66f-d06f2f31c027","Type":"ContainerDied","Data":"a749d9ba4accf81d93347fc63e9e45af3da637559cb537c28d55dcf5a402f6bf"} Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.390011 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a749d9ba4accf81d93347fc63e9e45af3da637559cb537c28d55dcf5a402f6bf" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.455444 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4"] Dec 03 21:04:04 crc kubenswrapper[4765]: E1203 21:04:04.455836 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f461fc-4404-425c-a66f-d06f2f31c027" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.455849 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f461fc-4404-425c-a66f-d06f2f31c027" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.456003 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f461fc-4404-425c-a66f-d06f2f31c027" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.456615 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.459099 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.459487 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.460070 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.460437 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.470240 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4"] Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.563553 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.563720 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.563810 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lmbg\" (UniqueName: \"kubernetes.io/projected/9940e2fd-580e-4f33-99b6-5441ea17b717-kube-api-access-2lmbg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.665334 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.665861 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.666107 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lmbg\" (UniqueName: \"kubernetes.io/projected/9940e2fd-580e-4f33-99b6-5441ea17b717-kube-api-access-2lmbg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.671482 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: 
\"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.672920 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.684782 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lmbg\" (UniqueName: \"kubernetes.io/projected/9940e2fd-580e-4f33-99b6-5441ea17b717-kube-api-access-2lmbg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z4gw4\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:04 crc kubenswrapper[4765]: I1203 21:04:04.797449 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:05 crc kubenswrapper[4765]: I1203 21:04:05.325172 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:04:05 crc kubenswrapper[4765]: I1203 21:04:05.328108 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4"] Dec 03 21:04:05 crc kubenswrapper[4765]: I1203 21:04:05.390876 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" event={"ID":"9940e2fd-580e-4f33-99b6-5441ea17b717","Type":"ContainerStarted","Data":"7462efb005acae9e092c6347292e79b3a1647bbadce0fc5b844d0d9a064d7f17"} Dec 03 21:04:06 crc kubenswrapper[4765]: I1203 21:04:06.401130 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" event={"ID":"9940e2fd-580e-4f33-99b6-5441ea17b717","Type":"ContainerStarted","Data":"4b32dc937e49e91d34eddbc2059d144f4788e12e59d0edcec647f2923f6b361e"} Dec 03 21:04:06 crc kubenswrapper[4765]: I1203 21:04:06.425320 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" podStartSLOduration=1.8609088059999999 podStartE2EDuration="2.425284721s" podCreationTimestamp="2025-12-03 21:04:04 +0000 UTC" firstStartedPulling="2025-12-03 21:04:05.324850507 +0000 UTC m=+1543.255395658" lastFinishedPulling="2025-12-03 21:04:05.889226422 +0000 UTC m=+1543.819771573" observedRunningTime="2025-12-03 21:04:06.418587648 +0000 UTC m=+1544.349132799" watchObservedRunningTime="2025-12-03 21:04:06.425284721 +0000 UTC m=+1544.355829872" Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.040936 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8vsmb"] Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 
21:04:14.049209 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7gs2x"] Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.056867 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-z8wmz"] Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.069671 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8vsmb"] Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.077924 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-z8wmz"] Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.085421 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7gs2x"] Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.371682 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3823c130-196f-4c3b-9028-301443274ef4" path="/var/lib/kubelet/pods/3823c130-196f-4c3b-9028-301443274ef4/volumes" Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.372258 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffedbc1-9f3a-46a0-9888-bb249ecc9670" path="/var/lib/kubelet/pods/dffedbc1-9f3a-46a0-9888-bb249ecc9670/volumes" Dec 03 21:04:14 crc kubenswrapper[4765]: I1203 21:04:14.372790 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fccef2c9-a838-4fbf-a2b7-275ba5803488" path="/var/lib/kubelet/pods/fccef2c9-a838-4fbf-a2b7-275ba5803488/volumes" Dec 03 21:04:15 crc kubenswrapper[4765]: I1203 21:04:15.037274 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3ba2-account-create-update-xx6xz"] Dec 03 21:04:15 crc kubenswrapper[4765]: I1203 21:04:15.049120 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9506-account-create-update-95fmt"] Dec 03 21:04:15 crc kubenswrapper[4765]: I1203 21:04:15.059255 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-3ba2-account-create-update-xx6xz"] Dec 03 21:04:15 crc kubenswrapper[4765]: I1203 21:04:15.068590 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e2f8-account-create-update-qxcsc"] Dec 03 21:04:15 crc kubenswrapper[4765]: I1203 21:04:15.075858 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9506-account-create-update-95fmt"] Dec 03 21:04:15 crc kubenswrapper[4765]: I1203 21:04:15.082289 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e2f8-account-create-update-qxcsc"] Dec 03 21:04:16 crc kubenswrapper[4765]: I1203 21:04:16.376797 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9781efae-8a1d-4f26-ac4f-a6ca36af2d6e" path="/var/lib/kubelet/pods/9781efae-8a1d-4f26-ac4f-a6ca36af2d6e/volumes" Dec 03 21:04:16 crc kubenswrapper[4765]: I1203 21:04:16.377702 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba06d7a8-a247-4572-ae04-7e29248e3878" path="/var/lib/kubelet/pods/ba06d7a8-a247-4572-ae04-7e29248e3878/volumes" Dec 03 21:04:16 crc kubenswrapper[4765]: I1203 21:04:16.378361 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9120997-5314-4a15-87c9-1315f6adbef3" path="/var/lib/kubelet/pods/c9120997-5314-4a15-87c9-1315f6adbef3/volumes" Dec 03 21:04:27 crc kubenswrapper[4765]: I1203 21:04:27.571603 4765 scope.go:117] "RemoveContainer" containerID="1d1ab15d700120e2e0235f728ea9e89a43ddf35cbe9af8bf0cd08563672498d2" Dec 03 21:04:27 crc kubenswrapper[4765]: I1203 21:04:27.619289 4765 scope.go:117] "RemoveContainer" containerID="37b493717e010ef3584e8ef7c84ca53ac8a99279b339528c4c30594094e68d9a" Dec 03 21:04:27 crc kubenswrapper[4765]: I1203 21:04:27.664305 4765 scope.go:117] "RemoveContainer" containerID="8b13c33535ca3eee8601014d14da978b020799aa49fb4e4a3c7d7ee0b33133fb" Dec 03 21:04:27 crc kubenswrapper[4765]: I1203 21:04:27.703354 4765 scope.go:117] "RemoveContainer" 
containerID="2f7da036ac56feacc2579fe2f82068a6dd6c30e6ae622215f77b17863d964885" Dec 03 21:04:27 crc kubenswrapper[4765]: I1203 21:04:27.765606 4765 scope.go:117] "RemoveContainer" containerID="d97eec23e76d8f99d1f99220da1919b035a9a0623f3bde643a80187642f9a021" Dec 03 21:04:27 crc kubenswrapper[4765]: I1203 21:04:27.808294 4765 scope.go:117] "RemoveContainer" containerID="c97172b65dd25f1b607a42915a770d24ebced514ad8d6dd66db892355f9a7c40" Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.064474 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7dec-account-create-update-zdgw6"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.080094 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-q9s77"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.094847 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-f2xv7"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.105407 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7r84p"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.113065 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7c91-account-create-update-7cstw"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.119450 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7dec-account-create-update-zdgw6"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.125265 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7r84p"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.132561 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-f2xv7"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.140760 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7c91-account-create-update-7cstw"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.148416 
4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-q9s77"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.155215 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dd63-account-create-update-rkkhn"] Dec 03 21:04:45 crc kubenswrapper[4765]: I1203 21:04:45.162355 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dd63-account-create-update-rkkhn"] Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.375704 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675" path="/var/lib/kubelet/pods/0c0bdf8b-7579-4f6f-a3ec-f68ece3a6675/volumes" Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.377856 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1959025f-e3a0-42a9-b4f7-c55151f34a91" path="/var/lib/kubelet/pods/1959025f-e3a0-42a9-b4f7-c55151f34a91/volumes" Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.378475 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e37f2d0-b9ea-4e9a-8259-0be89e132c64" path="/var/lib/kubelet/pods/1e37f2d0-b9ea-4e9a-8259-0be89e132c64/volumes" Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.379077 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e811ab-603b-493f-8df9-9e00de2b9cef" path="/var/lib/kubelet/pods/42e811ab-603b-493f-8df9-9e00de2b9cef/volumes" Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.380574 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab27ead-a53a-41ff-8f57-7fd8e0da7e21" path="/var/lib/kubelet/pods/6ab27ead-a53a-41ff-8f57-7fd8e0da7e21/volumes" Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.381157 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0c742d-3c3d-4020-89a8-53453beeab23" path="/var/lib/kubelet/pods/dc0c742d-3c3d-4020-89a8-53453beeab23/volumes" Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 
21:04:46.780994 4765 generic.go:334] "Generic (PLEG): container finished" podID="9940e2fd-580e-4f33-99b6-5441ea17b717" containerID="4b32dc937e49e91d34eddbc2059d144f4788e12e59d0edcec647f2923f6b361e" exitCode=0 Dec 03 21:04:46 crc kubenswrapper[4765]: I1203 21:04:46.781094 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" event={"ID":"9940e2fd-580e-4f33-99b6-5441ea17b717","Type":"ContainerDied","Data":"4b32dc937e49e91d34eddbc2059d144f4788e12e59d0edcec647f2923f6b361e"} Dec 03 21:04:47 crc kubenswrapper[4765]: I1203 21:04:47.047616 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mllvt"] Dec 03 21:04:47 crc kubenswrapper[4765]: I1203 21:04:47.057121 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mllvt"] Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.235140 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.323105 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lmbg\" (UniqueName: \"kubernetes.io/projected/9940e2fd-580e-4f33-99b6-5441ea17b717-kube-api-access-2lmbg\") pod \"9940e2fd-580e-4f33-99b6-5441ea17b717\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.323373 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-inventory\") pod \"9940e2fd-580e-4f33-99b6-5441ea17b717\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.323518 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-ssh-key\") pod \"9940e2fd-580e-4f33-99b6-5441ea17b717\" (UID: \"9940e2fd-580e-4f33-99b6-5441ea17b717\") " Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.329448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9940e2fd-580e-4f33-99b6-5441ea17b717-kube-api-access-2lmbg" (OuterVolumeSpecName: "kube-api-access-2lmbg") pod "9940e2fd-580e-4f33-99b6-5441ea17b717" (UID: "9940e2fd-580e-4f33-99b6-5441ea17b717"). InnerVolumeSpecName "kube-api-access-2lmbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.349466 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-inventory" (OuterVolumeSpecName: "inventory") pod "9940e2fd-580e-4f33-99b6-5441ea17b717" (UID: "9940e2fd-580e-4f33-99b6-5441ea17b717"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.356439 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9940e2fd-580e-4f33-99b6-5441ea17b717" (UID: "9940e2fd-580e-4f33-99b6-5441ea17b717"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.373054 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80088b6b-01d5-403b-b051-fd7defbee240" path="/var/lib/kubelet/pods/80088b6b-01d5-403b-b051-fd7defbee240/volumes" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.429855 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lmbg\" (UniqueName: \"kubernetes.io/projected/9940e2fd-580e-4f33-99b6-5441ea17b717-kube-api-access-2lmbg\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.430243 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.430262 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9940e2fd-580e-4f33-99b6-5441ea17b717-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.801998 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" event={"ID":"9940e2fd-580e-4f33-99b6-5441ea17b717","Type":"ContainerDied","Data":"7462efb005acae9e092c6347292e79b3a1647bbadce0fc5b844d0d9a064d7f17"} Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.802053 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7462efb005acae9e092c6347292e79b3a1647bbadce0fc5b844d0d9a064d7f17" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.802098 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.899881 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd"] Dec 03 21:04:48 crc kubenswrapper[4765]: E1203 21:04:48.900452 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9940e2fd-580e-4f33-99b6-5441ea17b717" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.900469 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9940e2fd-580e-4f33-99b6-5441ea17b717" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.900718 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9940e2fd-580e-4f33-99b6-5441ea17b717" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.901532 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.905965 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.906501 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.909824 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd"] Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.911804 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:04:48 crc kubenswrapper[4765]: I1203 21:04:48.911989 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.042097 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.042131 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.042181 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dn6d\" (UniqueName: \"kubernetes.io/projected/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-kube-api-access-2dn6d\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.143273 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.143373 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.143420 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dn6d\" (UniqueName: \"kubernetes.io/projected/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-kube-api-access-2dn6d\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.147597 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: 
\"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.148396 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.166742 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dn6d\" (UniqueName: \"kubernetes.io/projected/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-kube-api-access-2dn6d\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.224408 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.770950 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd"] Dec 03 21:04:49 crc kubenswrapper[4765]: W1203 21:04:49.778449 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e0694eb_01f7_42d1_bb82_aa0c84f1df92.slice/crio-64da09ff0df5fbc38e1abd1cf9de67bd8a57923e646773f3bb8f0c840729032f WatchSource:0}: Error finding container 64da09ff0df5fbc38e1abd1cf9de67bd8a57923e646773f3bb8f0c840729032f: Status 404 returned error can't find the container with id 64da09ff0df5fbc38e1abd1cf9de67bd8a57923e646773f3bb8f0c840729032f Dec 03 21:04:49 crc kubenswrapper[4765]: I1203 21:04:49.810296 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" event={"ID":"9e0694eb-01f7-42d1-bb82-aa0c84f1df92","Type":"ContainerStarted","Data":"64da09ff0df5fbc38e1abd1cf9de67bd8a57923e646773f3bb8f0c840729032f"} Dec 03 21:04:50 crc kubenswrapper[4765]: I1203 21:04:50.821638 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" event={"ID":"9e0694eb-01f7-42d1-bb82-aa0c84f1df92","Type":"ContainerStarted","Data":"a586b908aa604624dcecc2ff285e3675cd688e91280e4baa231f10d4a41a04f9"} Dec 03 21:04:50 crc kubenswrapper[4765]: I1203 21:04:50.841612 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" podStartSLOduration=2.170458423 podStartE2EDuration="2.841595903s" podCreationTimestamp="2025-12-03 21:04:48 +0000 UTC" firstStartedPulling="2025-12-03 21:04:49.780475637 +0000 UTC m=+1587.711020788" lastFinishedPulling="2025-12-03 21:04:50.451613077 +0000 UTC m=+1588.382158268" 
observedRunningTime="2025-12-03 21:04:50.839865997 +0000 UTC m=+1588.770411158" watchObservedRunningTime="2025-12-03 21:04:50.841595903 +0000 UTC m=+1588.772141054" Dec 03 21:04:51 crc kubenswrapper[4765]: I1203 21:04:51.052357 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-49dvf"] Dec 03 21:04:51 crc kubenswrapper[4765]: I1203 21:04:51.066269 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-49dvf"] Dec 03 21:04:52 crc kubenswrapper[4765]: I1203 21:04:52.370449 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b83af8-4f5b-405c-9961-3f37c37ee18b" path="/var/lib/kubelet/pods/b5b83af8-4f5b-405c-9961-3f37c37ee18b/volumes" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.728710 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q9xwp"] Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.732470 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.745859 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9xwp"] Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.755322 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snn4p\" (UniqueName: \"kubernetes.io/projected/7fc32fd6-22b9-4549-a9fe-7f2182116958-kube-api-access-snn4p\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.755398 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-utilities\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.755419 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-catalog-content\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.857267 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snn4p\" (UniqueName: \"kubernetes.io/projected/7fc32fd6-22b9-4549-a9fe-7f2182116958-kube-api-access-snn4p\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.857390 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-utilities\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.857421 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-catalog-content\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.857910 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-utilities\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.861726 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-catalog-content\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:53 crc kubenswrapper[4765]: I1203 21:04:53.887618 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snn4p\" (UniqueName: \"kubernetes.io/projected/7fc32fd6-22b9-4549-a9fe-7f2182116958-kube-api-access-snn4p\") pod \"redhat-operators-q9xwp\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:54 crc kubenswrapper[4765]: I1203 21:04:54.056656 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:04:54 crc kubenswrapper[4765]: I1203 21:04:54.861278 4765 generic.go:334] "Generic (PLEG): container finished" podID="9e0694eb-01f7-42d1-bb82-aa0c84f1df92" containerID="a586b908aa604624dcecc2ff285e3675cd688e91280e4baa231f10d4a41a04f9" exitCode=0 Dec 03 21:04:54 crc kubenswrapper[4765]: I1203 21:04:54.861499 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" event={"ID":"9e0694eb-01f7-42d1-bb82-aa0c84f1df92","Type":"ContainerDied","Data":"a586b908aa604624dcecc2ff285e3675cd688e91280e4baa231f10d4a41a04f9"} Dec 03 21:04:55 crc kubenswrapper[4765]: I1203 21:04:55.186108 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9xwp"] Dec 03 21:04:55 crc kubenswrapper[4765]: W1203 21:04:55.216202 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fc32fd6_22b9_4549_a9fe_7f2182116958.slice/crio-a1eeae1b998db1458299ba935fdf592d55fb19663687d976480e76faf27bb057 WatchSource:0}: Error finding container a1eeae1b998db1458299ba935fdf592d55fb19663687d976480e76faf27bb057: Status 404 returned error can't find the container with id a1eeae1b998db1458299ba935fdf592d55fb19663687d976480e76faf27bb057 Dec 03 21:04:55 crc kubenswrapper[4765]: I1203 21:04:55.873856 4765 generic.go:334] "Generic (PLEG): container finished" podID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerID="849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675" exitCode=0 Dec 03 21:04:55 crc kubenswrapper[4765]: I1203 21:04:55.873954 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9xwp" event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerDied","Data":"849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675"} Dec 03 21:04:55 crc kubenswrapper[4765]: I1203 
21:04:55.874211 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9xwp" event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerStarted","Data":"a1eeae1b998db1458299ba935fdf592d55fb19663687d976480e76faf27bb057"} Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.373173 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.549461 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dn6d\" (UniqueName: \"kubernetes.io/projected/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-kube-api-access-2dn6d\") pod \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.549872 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-inventory\") pod \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.550055 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-ssh-key\") pod \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\" (UID: \"9e0694eb-01f7-42d1-bb82-aa0c84f1df92\") " Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.555738 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-kube-api-access-2dn6d" (OuterVolumeSpecName: "kube-api-access-2dn6d") pod "9e0694eb-01f7-42d1-bb82-aa0c84f1df92" (UID: "9e0694eb-01f7-42d1-bb82-aa0c84f1df92"). InnerVolumeSpecName "kube-api-access-2dn6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.586859 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e0694eb-01f7-42d1-bb82-aa0c84f1df92" (UID: "9e0694eb-01f7-42d1-bb82-aa0c84f1df92"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.593514 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-inventory" (OuterVolumeSpecName: "inventory") pod "9e0694eb-01f7-42d1-bb82-aa0c84f1df92" (UID: "9e0694eb-01f7-42d1-bb82-aa0c84f1df92"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.652470 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dn6d\" (UniqueName: \"kubernetes.io/projected/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-kube-api-access-2dn6d\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.652507 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.652517 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0694eb-01f7-42d1-bb82-aa0c84f1df92-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.887523 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9xwp" 
event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerStarted","Data":"4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d"} Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.891067 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" event={"ID":"9e0694eb-01f7-42d1-bb82-aa0c84f1df92","Type":"ContainerDied","Data":"64da09ff0df5fbc38e1abd1cf9de67bd8a57923e646773f3bb8f0c840729032f"} Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.891117 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64da09ff0df5fbc38e1abd1cf9de67bd8a57923e646773f3bb8f0c840729032f" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.891189 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.983172 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp"] Dec 03 21:04:56 crc kubenswrapper[4765]: E1203 21:04:56.983681 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0694eb-01f7-42d1-bb82-aa0c84f1df92" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.983708 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0694eb-01f7-42d1-bb82-aa0c84f1df92" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.983945 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0694eb-01f7-42d1-bb82-aa0c84f1df92" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.984737 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.987401 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.987689 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.988872 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.990852 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:04:56 crc kubenswrapper[4765]: I1203 21:04:56.991502 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp"] Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.159641 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.159718 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjq8\" (UniqueName: \"kubernetes.io/projected/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-kube-api-access-wgjq8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.159745 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.261459 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.261846 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgjq8\" (UniqueName: \"kubernetes.io/projected/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-kube-api-access-wgjq8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.261882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.265441 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: 
\"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.265756 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.290557 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgjq8\" (UniqueName: \"kubernetes.io/projected/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-kube-api-access-wgjq8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.311979 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.867180 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp"] Dec 03 21:04:57 crc kubenswrapper[4765]: W1203 21:04:57.872191 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode25cfdc8_0ddd_4d7d_bda7_914d65f31caf.slice/crio-424f79e2335cd459ee214e794208e036f768612310b90c3b61663195ab6ae007 WatchSource:0}: Error finding container 424f79e2335cd459ee214e794208e036f768612310b90c3b61663195ab6ae007: Status 404 returned error can't find the container with id 424f79e2335cd459ee214e794208e036f768612310b90c3b61663195ab6ae007 Dec 03 21:04:57 crc kubenswrapper[4765]: I1203 21:04:57.899424 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" event={"ID":"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf","Type":"ContainerStarted","Data":"424f79e2335cd459ee214e794208e036f768612310b90c3b61663195ab6ae007"} Dec 03 21:04:58 crc kubenswrapper[4765]: I1203 21:04:58.907175 4765 generic.go:334] "Generic (PLEG): container finished" podID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerID="4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d" exitCode=0 Dec 03 21:04:58 crc kubenswrapper[4765]: I1203 21:04:58.907409 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9xwp" event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerDied","Data":"4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d"} Dec 03 21:05:00 crc kubenswrapper[4765]: I1203 21:05:00.941377 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9xwp" 
event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerStarted","Data":"f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03"} Dec 03 21:05:00 crc kubenswrapper[4765]: I1203 21:05:00.944978 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" event={"ID":"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf","Type":"ContainerStarted","Data":"4f9590658e87331fde740636a35d5979133b02bd26f65e265d246f5028b9174c"} Dec 03 21:05:00 crc kubenswrapper[4765]: I1203 21:05:00.966842 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q9xwp" podStartSLOduration=3.892729101 podStartE2EDuration="7.966820794s" podCreationTimestamp="2025-12-03 21:04:53 +0000 UTC" firstStartedPulling="2025-12-03 21:04:55.875661031 +0000 UTC m=+1593.806206182" lastFinishedPulling="2025-12-03 21:04:59.949752684 +0000 UTC m=+1597.880297875" observedRunningTime="2025-12-03 21:05:00.963696229 +0000 UTC m=+1598.894241380" watchObservedRunningTime="2025-12-03 21:05:00.966820794 +0000 UTC m=+1598.897365945" Dec 03 21:05:01 crc kubenswrapper[4765]: I1203 21:05:01.000839 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" podStartSLOduration=4.561024515 podStartE2EDuration="5.000819916s" podCreationTimestamp="2025-12-03 21:04:56 +0000 UTC" firstStartedPulling="2025-12-03 21:04:57.875693808 +0000 UTC m=+1595.806238969" lastFinishedPulling="2025-12-03 21:04:58.315489209 +0000 UTC m=+1596.246034370" observedRunningTime="2025-12-03 21:05:00.987407378 +0000 UTC m=+1598.917952579" watchObservedRunningTime="2025-12-03 21:05:01.000819916 +0000 UTC m=+1598.931365077" Dec 03 21:05:04 crc kubenswrapper[4765]: I1203 21:05:04.056902 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:05:04 crc 
kubenswrapper[4765]: I1203 21:05:04.058665 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:05:05 crc kubenswrapper[4765]: I1203 21:05:05.112440 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q9xwp" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="registry-server" probeResult="failure" output=< Dec 03 21:05:05 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 21:05:05 crc kubenswrapper[4765]: > Dec 03 21:05:14 crc kubenswrapper[4765]: I1203 21:05:14.107337 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:05:14 crc kubenswrapper[4765]: I1203 21:05:14.165854 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:05:14 crc kubenswrapper[4765]: I1203 21:05:14.347573 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9xwp"] Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.102826 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q9xwp" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="registry-server" containerID="cri-o://f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03" gracePeriod=2 Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.645706 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.759786 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-catalog-content\") pod \"7fc32fd6-22b9-4549-a9fe-7f2182116958\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.759864 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snn4p\" (UniqueName: \"kubernetes.io/projected/7fc32fd6-22b9-4549-a9fe-7f2182116958-kube-api-access-snn4p\") pod \"7fc32fd6-22b9-4549-a9fe-7f2182116958\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.759990 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-utilities\") pod \"7fc32fd6-22b9-4549-a9fe-7f2182116958\" (UID: \"7fc32fd6-22b9-4549-a9fe-7f2182116958\") " Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.761618 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-utilities" (OuterVolumeSpecName: "utilities") pod "7fc32fd6-22b9-4549-a9fe-7f2182116958" (UID: "7fc32fd6-22b9-4549-a9fe-7f2182116958"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.768337 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc32fd6-22b9-4549-a9fe-7f2182116958-kube-api-access-snn4p" (OuterVolumeSpecName: "kube-api-access-snn4p") pod "7fc32fd6-22b9-4549-a9fe-7f2182116958" (UID: "7fc32fd6-22b9-4549-a9fe-7f2182116958"). InnerVolumeSpecName "kube-api-access-snn4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.859101 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fc32fd6-22b9-4549-a9fe-7f2182116958" (UID: "7fc32fd6-22b9-4549-a9fe-7f2182116958"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.862068 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snn4p\" (UniqueName: \"kubernetes.io/projected/7fc32fd6-22b9-4549-a9fe-7f2182116958-kube-api-access-snn4p\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.862117 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:16 crc kubenswrapper[4765]: I1203 21:05:16.862129 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fc32fd6-22b9-4549-a9fe-7f2182116958-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.116277 4765 generic.go:334] "Generic (PLEG): container finished" podID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerID="f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03" exitCode=0 Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.116361 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9xwp" event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerDied","Data":"f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03"} Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.116398 4765 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-q9xwp" event={"ID":"7fc32fd6-22b9-4549-a9fe-7f2182116958","Type":"ContainerDied","Data":"a1eeae1b998db1458299ba935fdf592d55fb19663687d976480e76faf27bb057"} Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.116426 4765 scope.go:117] "RemoveContainer" containerID="f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.116585 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q9xwp" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.156867 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9xwp"] Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.157905 4765 scope.go:117] "RemoveContainer" containerID="4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.165118 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q9xwp"] Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.182555 4765 scope.go:117] "RemoveContainer" containerID="849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.229830 4765 scope.go:117] "RemoveContainer" containerID="f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03" Dec 03 21:05:17 crc kubenswrapper[4765]: E1203 21:05:17.230345 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03\": container with ID starting with f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03 not found: ID does not exist" containerID="f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.230428 4765 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03"} err="failed to get container status \"f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03\": rpc error: code = NotFound desc = could not find container \"f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03\": container with ID starting with f7fd566082a7653856a68187bb7c6ac47572f80dcf874d32a153fc5d1c9f0a03 not found: ID does not exist" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.230526 4765 scope.go:117] "RemoveContainer" containerID="4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d" Dec 03 21:05:17 crc kubenswrapper[4765]: E1203 21:05:17.230880 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d\": container with ID starting with 4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d not found: ID does not exist" containerID="4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.230963 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d"} err="failed to get container status \"4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d\": rpc error: code = NotFound desc = could not find container \"4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d\": container with ID starting with 4677eb0138d4e13e86ee1549b9d7215c06aae33e0fc5adc9de94a26d01c0658d not found: ID does not exist" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.231026 4765 scope.go:117] "RemoveContainer" containerID="849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675" Dec 03 21:05:17 crc kubenswrapper[4765]: E1203 
21:05:17.231326 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675\": container with ID starting with 849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675 not found: ID does not exist" containerID="849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675" Dec 03 21:05:17 crc kubenswrapper[4765]: I1203 21:05:17.231395 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675"} err="failed to get container status \"849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675\": rpc error: code = NotFound desc = could not find container \"849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675\": container with ID starting with 849827072b3909cd2022a18317207f68b3469aaf7068a41b9e7aa089324f3675 not found: ID does not exist" Dec 03 21:05:18 crc kubenswrapper[4765]: I1203 21:05:18.371506 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" path="/var/lib/kubelet/pods/7fc32fd6-22b9-4549-a9fe-7f2182116958/volumes" Dec 03 21:05:20 crc kubenswrapper[4765]: I1203 21:05:20.065201 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nzkps"] Dec 03 21:05:20 crc kubenswrapper[4765]: I1203 21:05:20.079470 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nzkps"] Dec 03 21:05:20 crc kubenswrapper[4765]: I1203 21:05:20.374106 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ecb436-7651-4a8f-a9b8-5f476df8161d" path="/var/lib/kubelet/pods/11ecb436-7651-4a8f-a9b8-5f476df8161d/volumes" Dec 03 21:05:24 crc kubenswrapper[4765]: I1203 21:05:24.798681 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:05:24 crc kubenswrapper[4765]: I1203 21:05:24.799265 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:05:27 crc kubenswrapper[4765]: I1203 21:05:27.049389 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rq8cg"] Dec 03 21:05:27 crc kubenswrapper[4765]: I1203 21:05:27.063140 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f5tgv"] Dec 03 21:05:27 crc kubenswrapper[4765]: I1203 21:05:27.072100 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f5tgv"] Dec 03 21:05:27 crc kubenswrapper[4765]: I1203 21:05:27.080760 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rq8cg"] Dec 03 21:05:27 crc kubenswrapper[4765]: I1203 21:05:27.938257 4765 scope.go:117] "RemoveContainer" containerID="82294af1e4bc09118963d288c16142f5cd2c6b83426c89c7224b8452b2b0b156" Dec 03 21:05:27 crc kubenswrapper[4765]: I1203 21:05:27.971432 4765 scope.go:117] "RemoveContainer" containerID="f16d0ceeabe8a5de605482a02b50f040d50487ad9861e9d0da2e7f8aedb8158b" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.006348 4765 scope.go:117] "RemoveContainer" containerID="462f835b9cddd802be5c5430dda21dc715fe276aa6fbf56cce397ee722285538" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.055010 4765 scope.go:117] "RemoveContainer" containerID="bbb4301f73f9741dd5626b5cf895e1ff79ce5b87f99e9d744fd78ee04f4370a2" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.085703 
4765 scope.go:117] "RemoveContainer" containerID="0aeb255f24d371cab29cd89740393ce2ad83c35782c8451c1b9e57f144d0dbc6" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.127284 4765 scope.go:117] "RemoveContainer" containerID="5df27bc840b213b034ef25622938ea039a34314b244aaed852501387c7208792" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.183222 4765 scope.go:117] "RemoveContainer" containerID="611b9785371b44b85c560895feeb1f2c12fdf2c3acb831c8747dee772c66ab0b" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.203810 4765 scope.go:117] "RemoveContainer" containerID="af9f702769e504f8ea84611b310d1cfb43aaec53772a19274b794eb495114010" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.221185 4765 scope.go:117] "RemoveContainer" containerID="533977e086adcb5d40ca46b8538d89a7757cdf761f5f88474e1370b0f85cf5bc" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.374270 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9150bf8-239d-4d51-bc11-81e118eb19f1" path="/var/lib/kubelet/pods/b9150bf8-239d-4d51-bc11-81e118eb19f1/volumes" Dec 03 21:05:28 crc kubenswrapper[4765]: I1203 21:05:28.375378 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c617f0ad-a3ec-4407-a4bf-494c3a362a48" path="/var/lib/kubelet/pods/c617f0ad-a3ec-4407-a4bf-494c3a362a48/volumes" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.501121 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c7m4w"] Dec 03 21:05:30 crc kubenswrapper[4765]: E1203 21:05:30.502931 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="extract-content" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.502956 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="extract-content" Dec 03 21:05:30 crc kubenswrapper[4765]: E1203 21:05:30.503089 4765 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="extract-utilities" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.503155 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="extract-utilities" Dec 03 21:05:30 crc kubenswrapper[4765]: E1203 21:05:30.503180 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="registry-server" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.503201 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="registry-server" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.506006 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc32fd6-22b9-4549-a9fe-7f2182116958" containerName="registry-server" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.507928 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.517537 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7m4w"] Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.629096 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc798\" (UniqueName: \"kubernetes.io/projected/cf586c65-44ab-4098-980f-2fe2f0a498cf-kube-api-access-zc798\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.629171 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-catalog-content\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.629212 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-utilities\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.730447 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc798\" (UniqueName: \"kubernetes.io/projected/cf586c65-44ab-4098-980f-2fe2f0a498cf-kube-api-access-zc798\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.730537 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-catalog-content\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.730587 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-utilities\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.731276 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-catalog-content\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.731290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-utilities\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.754949 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc798\" (UniqueName: \"kubernetes.io/projected/cf586c65-44ab-4098-980f-2fe2f0a498cf-kube-api-access-zc798\") pod \"certified-operators-c7m4w\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:30 crc kubenswrapper[4765]: I1203 21:05:30.834900 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:31 crc kubenswrapper[4765]: I1203 21:05:31.345504 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7m4w"] Dec 03 21:05:32 crc kubenswrapper[4765]: I1203 21:05:32.304764 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerID="9c665e602d38d150079988fde114d92fbf3b5b0c7d4a5c5d59343568180c2f66" exitCode=0 Dec 03 21:05:32 crc kubenswrapper[4765]: I1203 21:05:32.304846 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m4w" event={"ID":"cf586c65-44ab-4098-980f-2fe2f0a498cf","Type":"ContainerDied","Data":"9c665e602d38d150079988fde114d92fbf3b5b0c7d4a5c5d59343568180c2f66"} Dec 03 21:05:32 crc kubenswrapper[4765]: I1203 21:05:32.305397 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m4w" event={"ID":"cf586c65-44ab-4098-980f-2fe2f0a498cf","Type":"ContainerStarted","Data":"9b6573a2846743e5be9ea41cf63299ab12cc2a9954dd6d87e2ddcbf91d0e3091"} Dec 03 21:05:33 crc kubenswrapper[4765]: I1203 21:05:33.316215 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerID="262b3f2e2a705955fe887e639dec722851ba083bf0425980b05f0b3c4323a37a" exitCode=0 Dec 03 21:05:33 crc kubenswrapper[4765]: I1203 21:05:33.316273 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m4w" event={"ID":"cf586c65-44ab-4098-980f-2fe2f0a498cf","Type":"ContainerDied","Data":"262b3f2e2a705955fe887e639dec722851ba083bf0425980b05f0b3c4323a37a"} Dec 03 21:05:34 crc kubenswrapper[4765]: I1203 21:05:34.045283 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7slsb"] Dec 03 21:05:34 crc kubenswrapper[4765]: I1203 21:05:34.055350 4765 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-sync-7slsb"] Dec 03 21:05:34 crc kubenswrapper[4765]: I1203 21:05:34.336826 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m4w" event={"ID":"cf586c65-44ab-4098-980f-2fe2f0a498cf","Type":"ContainerStarted","Data":"f3a03b2731a3e9cfda9dae1da8f434cd410acbb9813f9cd5169e1db5cbdc23ab"} Dec 03 21:05:34 crc kubenswrapper[4765]: I1203 21:05:34.361957 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c7m4w" podStartSLOduration=2.728848213 podStartE2EDuration="4.361942375s" podCreationTimestamp="2025-12-03 21:05:30 +0000 UTC" firstStartedPulling="2025-12-03 21:05:32.307695522 +0000 UTC m=+1630.238240713" lastFinishedPulling="2025-12-03 21:05:33.940789714 +0000 UTC m=+1631.871334875" observedRunningTime="2025-12-03 21:05:34.360559057 +0000 UTC m=+1632.291104208" watchObservedRunningTime="2025-12-03 21:05:34.361942375 +0000 UTC m=+1632.292487526" Dec 03 21:05:34 crc kubenswrapper[4765]: I1203 21:05:34.377515 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3f8651-39c3-450e-9da1-06ad1dc357a7" path="/var/lib/kubelet/pods/2c3f8651-39c3-450e-9da1-06ad1dc357a7/volumes" Dec 03 21:05:37 crc kubenswrapper[4765]: I1203 21:05:37.048141 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tnnwc"] Dec 03 21:05:37 crc kubenswrapper[4765]: I1203 21:05:37.061162 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tnnwc"] Dec 03 21:05:38 crc kubenswrapper[4765]: I1203 21:05:38.379278 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eaeb80c-f6b8-48bb-80a4-3f43623cfc13" path="/var/lib/kubelet/pods/5eaeb80c-f6b8-48bb-80a4-3f43623cfc13/volumes" Dec 03 21:05:40 crc kubenswrapper[4765]: I1203 21:05:40.835088 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:40 crc kubenswrapper[4765]: I1203 21:05:40.835553 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:40 crc kubenswrapper[4765]: I1203 21:05:40.886169 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:41 crc kubenswrapper[4765]: I1203 21:05:41.482733 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:41 crc kubenswrapper[4765]: I1203 21:05:41.535456 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c7m4w"] Dec 03 21:05:43 crc kubenswrapper[4765]: I1203 21:05:43.443790 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c7m4w" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="registry-server" containerID="cri-o://f3a03b2731a3e9cfda9dae1da8f434cd410acbb9813f9cd5169e1db5cbdc23ab" gracePeriod=2 Dec 03 21:05:44 crc kubenswrapper[4765]: I1203 21:05:44.454159 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerID="f3a03b2731a3e9cfda9dae1da8f434cd410acbb9813f9cd5169e1db5cbdc23ab" exitCode=0 Dec 03 21:05:44 crc kubenswrapper[4765]: I1203 21:05:44.454203 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m4w" event={"ID":"cf586c65-44ab-4098-980f-2fe2f0a498cf","Type":"ContainerDied","Data":"f3a03b2731a3e9cfda9dae1da8f434cd410acbb9813f9cd5169e1db5cbdc23ab"} Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.016751 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.128145 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc798\" (UniqueName: \"kubernetes.io/projected/cf586c65-44ab-4098-980f-2fe2f0a498cf-kube-api-access-zc798\") pod \"cf586c65-44ab-4098-980f-2fe2f0a498cf\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.128400 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-utilities\") pod \"cf586c65-44ab-4098-980f-2fe2f0a498cf\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.128455 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-catalog-content\") pod \"cf586c65-44ab-4098-980f-2fe2f0a498cf\" (UID: \"cf586c65-44ab-4098-980f-2fe2f0a498cf\") " Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.129471 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-utilities" (OuterVolumeSpecName: "utilities") pod "cf586c65-44ab-4098-980f-2fe2f0a498cf" (UID: "cf586c65-44ab-4098-980f-2fe2f0a498cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.136788 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf586c65-44ab-4098-980f-2fe2f0a498cf-kube-api-access-zc798" (OuterVolumeSpecName: "kube-api-access-zc798") pod "cf586c65-44ab-4098-980f-2fe2f0a498cf" (UID: "cf586c65-44ab-4098-980f-2fe2f0a498cf"). InnerVolumeSpecName "kube-api-access-zc798". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.187968 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf586c65-44ab-4098-980f-2fe2f0a498cf" (UID: "cf586c65-44ab-4098-980f-2fe2f0a498cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.231147 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.231184 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf586c65-44ab-4098-980f-2fe2f0a498cf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.231198 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc798\" (UniqueName: \"kubernetes.io/projected/cf586c65-44ab-4098-980f-2fe2f0a498cf-kube-api-access-zc798\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.468622 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m4w" event={"ID":"cf586c65-44ab-4098-980f-2fe2f0a498cf","Type":"ContainerDied","Data":"9b6573a2846743e5be9ea41cf63299ab12cc2a9954dd6d87e2ddcbf91d0e3091"} Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.468708 4765 scope.go:117] "RemoveContainer" containerID="f3a03b2731a3e9cfda9dae1da8f434cd410acbb9813f9cd5169e1db5cbdc23ab" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.468785 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7m4w" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.489475 4765 scope.go:117] "RemoveContainer" containerID="262b3f2e2a705955fe887e639dec722851ba083bf0425980b05f0b3c4323a37a" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.513634 4765 scope.go:117] "RemoveContainer" containerID="9c665e602d38d150079988fde114d92fbf3b5b0c7d4a5c5d59343568180c2f66" Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.518998 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c7m4w"] Dec 03 21:05:45 crc kubenswrapper[4765]: I1203 21:05:45.534084 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c7m4w"] Dec 03 21:05:46 crc kubenswrapper[4765]: I1203 21:05:46.372936 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" path="/var/lib/kubelet/pods/cf586c65-44ab-4098-980f-2fe2f0a498cf/volumes" Dec 03 21:05:54 crc kubenswrapper[4765]: I1203 21:05:54.564816 4765 generic.go:334] "Generic (PLEG): container finished" podID="e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" containerID="4f9590658e87331fde740636a35d5979133b02bd26f65e265d246f5028b9174c" exitCode=0 Dec 03 21:05:54 crc kubenswrapper[4765]: I1203 21:05:54.564914 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" event={"ID":"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf","Type":"ContainerDied","Data":"4f9590658e87331fde740636a35d5979133b02bd26f65e265d246f5028b9174c"} Dec 03 21:05:54 crc kubenswrapper[4765]: I1203 21:05:54.798586 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:05:54 
crc kubenswrapper[4765]: I1203 21:05:54.798671 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:05:55 crc kubenswrapper[4765]: I1203 21:05:55.974694 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.059472 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-inventory\") pod \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.059612 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgjq8\" (UniqueName: \"kubernetes.io/projected/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-kube-api-access-wgjq8\") pod \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.059671 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-ssh-key\") pod \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\" (UID: \"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf\") " Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.068653 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-kube-api-access-wgjq8" (OuterVolumeSpecName: "kube-api-access-wgjq8") pod "e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" (UID: 
"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf"). InnerVolumeSpecName "kube-api-access-wgjq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.090115 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" (UID: "e25cfdc8-0ddd-4d7d-bda7-914d65f31caf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.099464 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-inventory" (OuterVolumeSpecName: "inventory") pod "e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" (UID: "e25cfdc8-0ddd-4d7d-bda7-914d65f31caf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.161613 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.161924 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgjq8\" (UniqueName: \"kubernetes.io/projected/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-kube-api-access-wgjq8\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.161936 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.588450 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" 
event={"ID":"e25cfdc8-0ddd-4d7d-bda7-914d65f31caf","Type":"ContainerDied","Data":"424f79e2335cd459ee214e794208e036f768612310b90c3b61663195ab6ae007"} Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.588545 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424f79e2335cd459ee214e794208e036f768612310b90c3b61663195ab6ae007" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.588601 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.690225 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wnqgb"] Dec 03 21:05:56 crc kubenswrapper[4765]: E1203 21:05:56.690672 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="extract-content" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.690689 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="extract-content" Dec 03 21:05:56 crc kubenswrapper[4765]: E1203 21:05:56.690708 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.690719 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:05:56 crc kubenswrapper[4765]: E1203 21:05:56.690737 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="registry-server" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.690745 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" 
containerName="registry-server" Dec 03 21:05:56 crc kubenswrapper[4765]: E1203 21:05:56.690761 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="extract-utilities" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.690769 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="extract-utilities" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.691030 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.691053 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf586c65-44ab-4098-980f-2fe2f0a498cf" containerName="registry-server" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.691750 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.695814 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.696182 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.696382 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.697823 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.713927 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wnqgb"] Dec 03 21:05:56 crc kubenswrapper[4765]: 
I1203 21:05:56.873682 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.874174 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5l2\" (UniqueName: \"kubernetes.io/projected/6f554675-9b8f-47ef-88d2-21532b35bf7e-kube-api-access-7l5l2\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.874267 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.976148 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.976412 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5l2\" (UniqueName: \"kubernetes.io/projected/6f554675-9b8f-47ef-88d2-21532b35bf7e-kube-api-access-7l5l2\") pod 
\"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.976473 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.980901 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:56 crc kubenswrapper[4765]: I1203 21:05:56.982728 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:57 crc kubenswrapper[4765]: I1203 21:05:57.001630 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5l2\" (UniqueName: \"kubernetes.io/projected/6f554675-9b8f-47ef-88d2-21532b35bf7e-kube-api-access-7l5l2\") pod \"ssh-known-hosts-edpm-deployment-wnqgb\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:57 crc kubenswrapper[4765]: I1203 21:05:57.012348 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:05:57 crc kubenswrapper[4765]: I1203 21:05:57.358267 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wnqgb"] Dec 03 21:05:57 crc kubenswrapper[4765]: I1203 21:05:57.603268 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" event={"ID":"6f554675-9b8f-47ef-88d2-21532b35bf7e","Type":"ContainerStarted","Data":"e7cd7362803ab7e966f9234e54d6621d1a9bef89910e7db289900e8fbfad6172"} Dec 03 21:05:58 crc kubenswrapper[4765]: I1203 21:05:58.621270 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" event={"ID":"6f554675-9b8f-47ef-88d2-21532b35bf7e","Type":"ContainerStarted","Data":"2d85f2a30d09624e37d57e37c48f052558099a3b0fd37a98c0b9247808ac290c"} Dec 03 21:05:58 crc kubenswrapper[4765]: I1203 21:05:58.643409 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" podStartSLOduration=2.231309984 podStartE2EDuration="2.643390156s" podCreationTimestamp="2025-12-03 21:05:56 +0000 UTC" firstStartedPulling="2025-12-03 21:05:57.361990473 +0000 UTC m=+1655.292535644" lastFinishedPulling="2025-12-03 21:05:57.774070655 +0000 UTC m=+1655.704615816" observedRunningTime="2025-12-03 21:05:58.641862905 +0000 UTC m=+1656.572408056" watchObservedRunningTime="2025-12-03 21:05:58.643390156 +0000 UTC m=+1656.573935327" Dec 03 21:06:05 crc kubenswrapper[4765]: I1203 21:06:05.722541 4765 generic.go:334] "Generic (PLEG): container finished" podID="6f554675-9b8f-47ef-88d2-21532b35bf7e" containerID="2d85f2a30d09624e37d57e37c48f052558099a3b0fd37a98c0b9247808ac290c" exitCode=0 Dec 03 21:06:05 crc kubenswrapper[4765]: I1203 21:06:05.722674 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" 
event={"ID":"6f554675-9b8f-47ef-88d2-21532b35bf7e","Type":"ContainerDied","Data":"2d85f2a30d09624e37d57e37c48f052558099a3b0fd37a98c0b9247808ac290c"} Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.181163 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.285255 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-ssh-key-openstack-edpm-ipam\") pod \"6f554675-9b8f-47ef-88d2-21532b35bf7e\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.285343 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-inventory-0\") pod \"6f554675-9b8f-47ef-88d2-21532b35bf7e\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.285440 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5l2\" (UniqueName: \"kubernetes.io/projected/6f554675-9b8f-47ef-88d2-21532b35bf7e-kube-api-access-7l5l2\") pod \"6f554675-9b8f-47ef-88d2-21532b35bf7e\" (UID: \"6f554675-9b8f-47ef-88d2-21532b35bf7e\") " Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.291694 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f554675-9b8f-47ef-88d2-21532b35bf7e-kube-api-access-7l5l2" (OuterVolumeSpecName: "kube-api-access-7l5l2") pod "6f554675-9b8f-47ef-88d2-21532b35bf7e" (UID: "6f554675-9b8f-47ef-88d2-21532b35bf7e"). InnerVolumeSpecName "kube-api-access-7l5l2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.320499 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6f554675-9b8f-47ef-88d2-21532b35bf7e" (UID: "6f554675-9b8f-47ef-88d2-21532b35bf7e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.336028 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f554675-9b8f-47ef-88d2-21532b35bf7e" (UID: "6f554675-9b8f-47ef-88d2-21532b35bf7e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.387759 4765 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.387811 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5l2\" (UniqueName: \"kubernetes.io/projected/6f554675-9b8f-47ef-88d2-21532b35bf7e-kube-api-access-7l5l2\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.387841 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f554675-9b8f-47ef-88d2-21532b35bf7e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.744860 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" 
event={"ID":"6f554675-9b8f-47ef-88d2-21532b35bf7e","Type":"ContainerDied","Data":"e7cd7362803ab7e966f9234e54d6621d1a9bef89910e7db289900e8fbfad6172"} Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.744902 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7cd7362803ab7e966f9234e54d6621d1a9bef89910e7db289900e8fbfad6172" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.744964 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wnqgb" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.848422 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf"] Dec 03 21:06:07 crc kubenswrapper[4765]: E1203 21:06:07.848798 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f554675-9b8f-47ef-88d2-21532b35bf7e" containerName="ssh-known-hosts-edpm-deployment" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.848814 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f554675-9b8f-47ef-88d2-21532b35bf7e" containerName="ssh-known-hosts-edpm-deployment" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.848973 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f554675-9b8f-47ef-88d2-21532b35bf7e" containerName="ssh-known-hosts-edpm-deployment" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.849606 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.851850 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.852164 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.852465 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.852909 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.882564 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf"] Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.998866 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.999474 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:07 crc kubenswrapper[4765]: I1203 21:06:07.999762 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp57l\" (UniqueName: \"kubernetes.io/projected/fcc376d4-2492-4695-882b-270070bcd17a-kube-api-access-bp57l\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.049808 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qrslv"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.061045 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mgkxp"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.072831 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3373-account-create-update-zcll2"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.082612 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qk68p"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.094237 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qrslv"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.117859 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.117984 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp57l\" (UniqueName: \"kubernetes.io/projected/fcc376d4-2492-4695-882b-270070bcd17a-kube-api-access-bp57l\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.118110 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.119015 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mgkxp"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.124364 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.124980 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.132355 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3373-account-create-update-zcll2"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.141043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp57l\" (UniqueName: \"kubernetes.io/projected/fcc376d4-2492-4695-882b-270070bcd17a-kube-api-access-bp57l\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mdjxf\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.141193 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qk68p"] Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.172559 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.374030 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2f84ce-d56b-46e8-bcc4-0b65034e87a1" path="/var/lib/kubelet/pods/0b2f84ce-d56b-46e8-bcc4-0b65034e87a1/volumes" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.375566 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977d6049-5c13-47b0-8a4e-c62fea3cd6d7" path="/var/lib/kubelet/pods/977d6049-5c13-47b0-8a4e-c62fea3cd6d7/volumes" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.376096 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f578a1fd-499c-4610-af9f-3e8ad5555749" path="/var/lib/kubelet/pods/f578a1fd-499c-4610-af9f-3e8ad5555749/volumes" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.376625 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d90462-f312-4c5f-a20d-092d517b41e0" path="/var/lib/kubelet/pods/f6d90462-f312-4c5f-a20d-092d517b41e0/volumes" Dec 03 21:06:08 crc kubenswrapper[4765]: I1203 21:06:08.743273 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf"] Dec 03 21:06:08 crc kubenswrapper[4765]: W1203 21:06:08.751916 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcc376d4_2492_4695_882b_270070bcd17a.slice/crio-7f24a545ace1ff6c60552c2ad896c6e33d1965de6b5caa651a4a0fe33cc8a23d WatchSource:0}: Error finding container 
7f24a545ace1ff6c60552c2ad896c6e33d1965de6b5caa651a4a0fe33cc8a23d: Status 404 returned error can't find the container with id 7f24a545ace1ff6c60552c2ad896c6e33d1965de6b5caa651a4a0fe33cc8a23d Dec 03 21:06:09 crc kubenswrapper[4765]: I1203 21:06:09.043262 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b283-account-create-update-jdflm"] Dec 03 21:06:09 crc kubenswrapper[4765]: I1203 21:06:09.052364 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e997-account-create-update-m6fht"] Dec 03 21:06:09 crc kubenswrapper[4765]: I1203 21:06:09.065915 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b283-account-create-update-jdflm"] Dec 03 21:06:09 crc kubenswrapper[4765]: I1203 21:06:09.077610 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e997-account-create-update-m6fht"] Dec 03 21:06:09 crc kubenswrapper[4765]: I1203 21:06:09.763394 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" event={"ID":"fcc376d4-2492-4695-882b-270070bcd17a","Type":"ContainerStarted","Data":"7f24a545ace1ff6c60552c2ad896c6e33d1965de6b5caa651a4a0fe33cc8a23d"} Dec 03 21:06:10 crc kubenswrapper[4765]: I1203 21:06:10.374950 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9b4f82-dda0-446d-97c5-b94c26326298" path="/var/lib/kubelet/pods/6a9b4f82-dda0-446d-97c5-b94c26326298/volumes" Dec 03 21:06:10 crc kubenswrapper[4765]: I1203 21:06:10.376714 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcbb6896-f557-424f-871f-2f5df6968bd6" path="/var/lib/kubelet/pods/bcbb6896-f557-424f-871f-2f5df6968bd6/volumes" Dec 03 21:06:10 crc kubenswrapper[4765]: I1203 21:06:10.778562 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" 
event={"ID":"fcc376d4-2492-4695-882b-270070bcd17a","Type":"ContainerStarted","Data":"e8e15e8613a6d9429f82e232ea9c52d518813ac0458fba5384d975aecb96bec6"} Dec 03 21:06:19 crc kubenswrapper[4765]: I1203 21:06:19.877799 4765 generic.go:334] "Generic (PLEG): container finished" podID="fcc376d4-2492-4695-882b-270070bcd17a" containerID="e8e15e8613a6d9429f82e232ea9c52d518813ac0458fba5384d975aecb96bec6" exitCode=0 Dec 03 21:06:19 crc kubenswrapper[4765]: I1203 21:06:19.877856 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" event={"ID":"fcc376d4-2492-4695-882b-270070bcd17a","Type":"ContainerDied","Data":"e8e15e8613a6d9429f82e232ea9c52d518813ac0458fba5384d975aecb96bec6"} Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.390149 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.577199 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp57l\" (UniqueName: \"kubernetes.io/projected/fcc376d4-2492-4695-882b-270070bcd17a-kube-api-access-bp57l\") pod \"fcc376d4-2492-4695-882b-270070bcd17a\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.577816 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-ssh-key\") pod \"fcc376d4-2492-4695-882b-270070bcd17a\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.577971 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-inventory\") pod \"fcc376d4-2492-4695-882b-270070bcd17a\" (UID: \"fcc376d4-2492-4695-882b-270070bcd17a\") " Dec 
03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.587744 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc376d4-2492-4695-882b-270070bcd17a-kube-api-access-bp57l" (OuterVolumeSpecName: "kube-api-access-bp57l") pod "fcc376d4-2492-4695-882b-270070bcd17a" (UID: "fcc376d4-2492-4695-882b-270070bcd17a"). InnerVolumeSpecName "kube-api-access-bp57l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.626042 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fcc376d4-2492-4695-882b-270070bcd17a" (UID: "fcc376d4-2492-4695-882b-270070bcd17a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.628569 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-inventory" (OuterVolumeSpecName: "inventory") pod "fcc376d4-2492-4695-882b-270070bcd17a" (UID: "fcc376d4-2492-4695-882b-270070bcd17a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.697879 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.697940 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc376d4-2492-4695-882b-270070bcd17a-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.697975 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp57l\" (UniqueName: \"kubernetes.io/projected/fcc376d4-2492-4695-882b-270070bcd17a-kube-api-access-bp57l\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.904014 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" event={"ID":"fcc376d4-2492-4695-882b-270070bcd17a","Type":"ContainerDied","Data":"7f24a545ace1ff6c60552c2ad896c6e33d1965de6b5caa651a4a0fe33cc8a23d"} Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.904080 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f24a545ace1ff6c60552c2ad896c6e33d1965de6b5caa651a4a0fe33cc8a23d" Dec 03 21:06:21 crc kubenswrapper[4765]: I1203 21:06:21.904094 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.002661 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct"] Dec 03 21:06:22 crc kubenswrapper[4765]: E1203 21:06:22.003075 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc376d4-2492-4695-882b-270070bcd17a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.003096 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc376d4-2492-4695-882b-270070bcd17a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.003275 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc376d4-2492-4695-882b-270070bcd17a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.003933 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.006823 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.007507 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.007509 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.008802 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.025910 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct"] Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.206427 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.207119 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.207419 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5qf\" (UniqueName: \"kubernetes.io/projected/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-kube-api-access-hc5qf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.309514 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.309591 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.309643 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc5qf\" (UniqueName: \"kubernetes.io/projected/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-kube-api-access-hc5qf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.315991 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: 
\"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.323932 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.341767 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc5qf\" (UniqueName: \"kubernetes.io/projected/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-kube-api-access-hc5qf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.624695 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:06:22 crc kubenswrapper[4765]: I1203 21:06:22.631637 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:23 crc kubenswrapper[4765]: I1203 21:06:23.015226 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct"] Dec 03 21:06:23 crc kubenswrapper[4765]: W1203 21:06:23.018521 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea4e33c_e6fc_46e9_9fb8_639e43bd000b.slice/crio-29da7a7b8bc54d4d9687fbd6d987356ae1f11154a7bb47c5780b635f28fe11af WatchSource:0}: Error finding container 29da7a7b8bc54d4d9687fbd6d987356ae1f11154a7bb47c5780b635f28fe11af: Status 404 returned error can't find the container with id 29da7a7b8bc54d4d9687fbd6d987356ae1f11154a7bb47c5780b635f28fe11af Dec 03 21:06:23 crc kubenswrapper[4765]: I1203 21:06:23.466738 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:06:23 crc kubenswrapper[4765]: I1203 21:06:23.925738 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" event={"ID":"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b","Type":"ContainerStarted","Data":"4181cab5c2737767eaa48a54aafd4037fc169ecfb8cda7fec6d498c8f6e3a204"} Dec 03 21:06:23 crc kubenswrapper[4765]: I1203 21:06:23.926300 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" event={"ID":"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b","Type":"ContainerStarted","Data":"29da7a7b8bc54d4d9687fbd6d987356ae1f11154a7bb47c5780b635f28fe11af"} Dec 03 21:06:23 crc kubenswrapper[4765]: I1203 21:06:23.950055 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" podStartSLOduration=2.5074770280000003 podStartE2EDuration="2.950028498s" podCreationTimestamp="2025-12-03 21:06:21 +0000 
UTC" firstStartedPulling="2025-12-03 21:06:23.021696095 +0000 UTC m=+1680.952241286" lastFinishedPulling="2025-12-03 21:06:23.464247595 +0000 UTC m=+1681.394792756" observedRunningTime="2025-12-03 21:06:23.944935312 +0000 UTC m=+1681.875480483" watchObservedRunningTime="2025-12-03 21:06:23.950028498 +0000 UTC m=+1681.880573689" Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.798239 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.798366 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.798443 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.799376 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.799472 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" 
containerName="machine-config-daemon" containerID="cri-o://1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" gracePeriod=600 Dec 03 21:06:24 crc kubenswrapper[4765]: E1203 21:06:24.930544 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.943370 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" exitCode=0 Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.943452 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae"} Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.943551 4765 scope.go:117] "RemoveContainer" containerID="d72382d303db1a66d05ac874469c7186f77b9d02304b84ce9b1323dcea340ec0" Dec 03 21:06:24 crc kubenswrapper[4765]: I1203 21:06:24.944302 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:06:24 crc kubenswrapper[4765]: E1203 21:06:24.944716 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.433752 4765 scope.go:117] "RemoveContainer" containerID="c4163dafb664f6d6c108a64fc28d978da5dbf70b293ab5ca7cb7241f9f21c012" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.485097 4765 scope.go:117] "RemoveContainer" containerID="21db9182694416f5ed515caa7a2995e2ac8decae61b121dbe8b19d7ab15df1ef" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.514279 4765 scope.go:117] "RemoveContainer" containerID="f3600cd37fa54b671bbd74790d8168820ebe8ca4eb3cd878fb18a40af2066af7" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.564648 4765 scope.go:117] "RemoveContainer" containerID="75c829f86ea1e72d6db4b3b1fb3171b69f34e2fa86fee5ec0303c4c1de4380d6" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.603189 4765 scope.go:117] "RemoveContainer" containerID="e586e0d7cbce1ab355b8717dfdf1800859ceb72540fc34dd4cb5b10433155232" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.674577 4765 scope.go:117] "RemoveContainer" containerID="ea01e06e5aa1444e9af260d40003e11f7e2fadda9cc92556c1e97718ec053c77" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.702770 4765 scope.go:117] "RemoveContainer" containerID="67f7ed176fc4e23352446441719955da13f6cdc2c61d8f4174e64ed5b8d83666" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.721873 4765 scope.go:117] "RemoveContainer" containerID="9f19e720b40348d1faa22d92bd1650cecf0e914329cb880f7dbb4bd47d3a9cb9" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.754213 4765 scope.go:117] "RemoveContainer" containerID="3d94ce243bfd1ef677a4258fad3c3e0be1b85c3a5d6209a039176b75a85c2927" Dec 03 21:06:28 crc kubenswrapper[4765]: I1203 21:06:28.780781 4765 scope.go:117] "RemoveContainer" containerID="719a25f5480a5677e97fbecac2bb6abee5c94e127a294cc5763ff7b08afe19be" Dec 03 21:06:34 crc kubenswrapper[4765]: I1203 21:06:34.041062 4765 generic.go:334] "Generic 
(PLEG): container finished" podID="4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" containerID="4181cab5c2737767eaa48a54aafd4037fc169ecfb8cda7fec6d498c8f6e3a204" exitCode=0 Dec 03 21:06:34 crc kubenswrapper[4765]: I1203 21:06:34.041173 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" event={"ID":"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b","Type":"ContainerDied","Data":"4181cab5c2737767eaa48a54aafd4037fc169ecfb8cda7fec6d498c8f6e3a204"} Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.359954 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:06:35 crc kubenswrapper[4765]: E1203 21:06:35.360571 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.560817 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.716156 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc5qf\" (UniqueName: \"kubernetes.io/projected/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-kube-api-access-hc5qf\") pod \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.716234 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-inventory\") pod \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.716276 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-ssh-key\") pod \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\" (UID: \"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b\") " Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.721508 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-kube-api-access-hc5qf" (OuterVolumeSpecName: "kube-api-access-hc5qf") pod "4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" (UID: "4ea4e33c-e6fc-46e9-9fb8-639e43bd000b"). InnerVolumeSpecName "kube-api-access-hc5qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.739625 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-inventory" (OuterVolumeSpecName: "inventory") pod "4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" (UID: "4ea4e33c-e6fc-46e9-9fb8-639e43bd000b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.742404 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" (UID: "4ea4e33c-e6fc-46e9-9fb8-639e43bd000b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.818744 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc5qf\" (UniqueName: \"kubernetes.io/projected/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-kube-api-access-hc5qf\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.818789 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:35 crc kubenswrapper[4765]: I1203 21:06:35.818804 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:06:36 crc kubenswrapper[4765]: I1203 21:06:36.072746 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" event={"ID":"4ea4e33c-e6fc-46e9-9fb8-639e43bd000b","Type":"ContainerDied","Data":"29da7a7b8bc54d4d9687fbd6d987356ae1f11154a7bb47c5780b635f28fe11af"} Dec 03 21:06:36 crc kubenswrapper[4765]: I1203 21:06:36.072808 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29da7a7b8bc54d4d9687fbd6d987356ae1f11154a7bb47c5780b635f28fe11af" Dec 03 21:06:36 crc kubenswrapper[4765]: I1203 21:06:36.072828 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct" Dec 03 21:06:40 crc kubenswrapper[4765]: I1203 21:06:40.076527 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-llggj"] Dec 03 21:06:40 crc kubenswrapper[4765]: I1203 21:06:40.089035 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-llggj"] Dec 03 21:06:40 crc kubenswrapper[4765]: I1203 21:06:40.373073 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6e2c85-155f-4b33-b3c7-fb8984ebab25" path="/var/lib/kubelet/pods/8b6e2c85-155f-4b33-b3c7-fb8984ebab25/volumes" Dec 03 21:06:48 crc kubenswrapper[4765]: I1203 21:06:48.360279 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:06:48 crc kubenswrapper[4765]: E1203 21:06:48.361094 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:07:02 crc kubenswrapper[4765]: I1203 21:07:02.070720 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rtsrx"] Dec 03 21:07:02 crc kubenswrapper[4765]: I1203 21:07:02.087325 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rtsrx"] Dec 03 21:07:02 crc kubenswrapper[4765]: I1203 21:07:02.374644 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:07:02 crc kubenswrapper[4765]: E1203 21:07:02.375117 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:07:02 crc kubenswrapper[4765]: I1203 21:07:02.379283 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de4d2e6-ab91-4e44-a83d-ba6f9f384be0" path="/var/lib/kubelet/pods/1de4d2e6-ab91-4e44-a83d-ba6f9f384be0/volumes" Dec 03 21:07:03 crc kubenswrapper[4765]: I1203 21:07:03.031089 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg8mp"] Dec 03 21:07:03 crc kubenswrapper[4765]: I1203 21:07:03.043119 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg8mp"] Dec 03 21:07:04 crc kubenswrapper[4765]: I1203 21:07:04.400755 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5979c42-e1df-4dd8-af4d-7e02e309bcc0" path="/var/lib/kubelet/pods/e5979c42-e1df-4dd8-af4d-7e02e309bcc0/volumes" Dec 03 21:07:13 crc kubenswrapper[4765]: I1203 21:07:13.360016 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:07:13 crc kubenswrapper[4765]: E1203 21:07:13.362671 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:07:28 crc kubenswrapper[4765]: I1203 21:07:28.361355 4765 scope.go:117] "RemoveContainer" 
containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:07:28 crc kubenswrapper[4765]: E1203 21:07:28.364785 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:07:29 crc kubenswrapper[4765]: I1203 21:07:29.043106 4765 scope.go:117] "RemoveContainer" containerID="0775648bb788239c52f7c759bd63ac9471f09976758422b8390bfb0ef802681c" Dec 03 21:07:29 crc kubenswrapper[4765]: I1203 21:07:29.099135 4765 scope.go:117] "RemoveContainer" containerID="9954a5b1d8cdf831d69d05815ae849574bdfe30e8da9844a453861af2c2f2eb9" Dec 03 21:07:29 crc kubenswrapper[4765]: I1203 21:07:29.167826 4765 scope.go:117] "RemoveContainer" containerID="440aabab35b3e1374dabf31e92da5c1fab2b7bb5aef36c753ac613f36e1d7b9f" Dec 03 21:07:40 crc kubenswrapper[4765]: I1203 21:07:40.360010 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:07:40 crc kubenswrapper[4765]: E1203 21:07:40.360692 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:07:47 crc kubenswrapper[4765]: I1203 21:07:47.062694 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5wxp"] Dec 03 21:07:47 crc kubenswrapper[4765]: I1203 
21:07:47.071988 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5wxp"] Dec 03 21:07:48 crc kubenswrapper[4765]: I1203 21:07:48.369810 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529d6306-abe5-44f9-8804-1f374c25cadd" path="/var/lib/kubelet/pods/529d6306-abe5-44f9-8804-1f374c25cadd/volumes" Dec 03 21:07:52 crc kubenswrapper[4765]: I1203 21:07:52.370804 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:07:52 crc kubenswrapper[4765]: E1203 21:07:52.371628 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:08:05 crc kubenswrapper[4765]: I1203 21:08:05.360922 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:08:05 crc kubenswrapper[4765]: E1203 21:08:05.361855 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:08:16 crc kubenswrapper[4765]: I1203 21:08:16.360017 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:08:16 crc kubenswrapper[4765]: E1203 21:08:16.360922 4765 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:08:29 crc kubenswrapper[4765]: I1203 21:08:29.267591 4765 scope.go:117] "RemoveContainer" containerID="f62f80ad37ec5176f76eba14f823fbe58ecaa4c36e50e2accae28fc58b07d95f" Dec 03 21:08:29 crc kubenswrapper[4765]: I1203 21:08:29.360258 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:08:29 crc kubenswrapper[4765]: E1203 21:08:29.360581 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:08:41 crc kubenswrapper[4765]: I1203 21:08:41.360390 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:08:41 crc kubenswrapper[4765]: E1203 21:08:41.361594 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:08:53 crc kubenswrapper[4765]: I1203 21:08:53.359749 4765 scope.go:117] 
"RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:08:53 crc kubenswrapper[4765]: E1203 21:08:53.360513 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:09:06 crc kubenswrapper[4765]: I1203 21:09:06.359690 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:09:06 crc kubenswrapper[4765]: E1203 21:09:06.360443 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:09:18 crc kubenswrapper[4765]: I1203 21:09:18.360429 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:09:18 crc kubenswrapper[4765]: E1203 21:09:18.361400 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:09:32 crc kubenswrapper[4765]: I1203 21:09:32.371929 
4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:09:32 crc kubenswrapper[4765]: E1203 21:09:32.373049 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.929874 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-llbzl"] Dec 03 21:09:38 crc kubenswrapper[4765]: E1203 21:09:38.930975 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.930993 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.931221 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.932759 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.955400 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbzl"] Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.966899 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/093c506d-ed96-47c2-8e8d-c499d82381e5-utilities\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.967218 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/093c506d-ed96-47c2-8e8d-c499d82381e5-catalog-content\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:38 crc kubenswrapper[4765]: I1203 21:09:38.967369 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdts\" (UniqueName: \"kubernetes.io/projected/093c506d-ed96-47c2-8e8d-c499d82381e5-kube-api-access-qtdts\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.069495 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/093c506d-ed96-47c2-8e8d-c499d82381e5-utilities\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.069613 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/093c506d-ed96-47c2-8e8d-c499d82381e5-catalog-content\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.069659 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtdts\" (UniqueName: \"kubernetes.io/projected/093c506d-ed96-47c2-8e8d-c499d82381e5-kube-api-access-qtdts\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.070226 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/093c506d-ed96-47c2-8e8d-c499d82381e5-utilities\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.070372 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/093c506d-ed96-47c2-8e8d-c499d82381e5-catalog-content\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.090473 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtdts\" (UniqueName: \"kubernetes.io/projected/093c506d-ed96-47c2-8e8d-c499d82381e5-kube-api-access-qtdts\") pod \"community-operators-llbzl\" (UID: \"093c506d-ed96-47c2-8e8d-c499d82381e5\") " pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.266958 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.666671 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbzl"] Dec 03 21:09:39 crc kubenswrapper[4765]: W1203 21:09:39.680254 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod093c506d_ed96_47c2_8e8d_c499d82381e5.slice/crio-a224ff518f1c7aa5347840991c494ce797e772f9a43b903a47241432a9b9c23a WatchSource:0}: Error finding container a224ff518f1c7aa5347840991c494ce797e772f9a43b903a47241432a9b9c23a: Status 404 returned error can't find the container with id a224ff518f1c7aa5347840991c494ce797e772f9a43b903a47241432a9b9c23a Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.975125 4765 generic.go:334] "Generic (PLEG): container finished" podID="093c506d-ed96-47c2-8e8d-c499d82381e5" containerID="e6eebc7967daf49eb1c8fd88f998d6a485fff2efa8a81f3411ae4222a54b7dbe" exitCode=0 Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.975169 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbzl" event={"ID":"093c506d-ed96-47c2-8e8d-c499d82381e5","Type":"ContainerDied","Data":"e6eebc7967daf49eb1c8fd88f998d6a485fff2efa8a81f3411ae4222a54b7dbe"} Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.975206 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbzl" event={"ID":"093c506d-ed96-47c2-8e8d-c499d82381e5","Type":"ContainerStarted","Data":"a224ff518f1c7aa5347840991c494ce797e772f9a43b903a47241432a9b9c23a"} Dec 03 21:09:39 crc kubenswrapper[4765]: I1203 21:09:39.976998 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.329453 4765 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xjqkv"] Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.332018 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.346222 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqkv"] Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.424989 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwwt\" (UniqueName: \"kubernetes.io/projected/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-kube-api-access-bvwwt\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.425048 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-catalog-content\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.425274 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-utilities\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.527429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwwt\" (UniqueName: \"kubernetes.io/projected/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-kube-api-access-bvwwt\") pod \"redhat-marketplace-xjqkv\" (UID: 
\"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.527505 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-catalog-content\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.527579 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-utilities\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.528074 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-catalog-content\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.528235 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-utilities\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.554143 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwwt\" (UniqueName: \"kubernetes.io/projected/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-kube-api-access-bvwwt\") pod \"redhat-marketplace-xjqkv\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " 
pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:41 crc kubenswrapper[4765]: I1203 21:09:41.712765 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:42 crc kubenswrapper[4765]: I1203 21:09:42.191517 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqkv"] Dec 03 21:09:43 crc kubenswrapper[4765]: W1203 21:09:43.448284 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3e5b162_d7e9_42f8_88f9_56ce13962ccd.slice/crio-d5b5d26bb1848b50272d60f61baefe55f55ba91243e097470fe4ac988948fe0c WatchSource:0}: Error finding container d5b5d26bb1848b50272d60f61baefe55f55ba91243e097470fe4ac988948fe0c: Status 404 returned error can't find the container with id d5b5d26bb1848b50272d60f61baefe55f55ba91243e097470fe4ac988948fe0c Dec 03 21:09:44 crc kubenswrapper[4765]: I1203 21:09:44.043566 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbzl" event={"ID":"093c506d-ed96-47c2-8e8d-c499d82381e5","Type":"ContainerStarted","Data":"9e893c4c720928866cc86870cc6fcbeece6fabe0770c6ec902b27ffc7f44fa19"} Dec 03 21:09:44 crc kubenswrapper[4765]: I1203 21:09:44.063730 4765 generic.go:334] "Generic (PLEG): container finished" podID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerID="370ebb813c50f8919ce4de4bb534da5cb10d5a3835a049b97db5dbdc6e820d32" exitCode=0 Dec 03 21:09:44 crc kubenswrapper[4765]: I1203 21:09:44.063780 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqkv" event={"ID":"e3e5b162-d7e9-42f8-88f9-56ce13962ccd","Type":"ContainerDied","Data":"370ebb813c50f8919ce4de4bb534da5cb10d5a3835a049b97db5dbdc6e820d32"} Dec 03 21:09:44 crc kubenswrapper[4765]: I1203 21:09:44.063805 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xjqkv" event={"ID":"e3e5b162-d7e9-42f8-88f9-56ce13962ccd","Type":"ContainerStarted","Data":"d5b5d26bb1848b50272d60f61baefe55f55ba91243e097470fe4ac988948fe0c"} Dec 03 21:09:45 crc kubenswrapper[4765]: I1203 21:09:45.075749 4765 generic.go:334] "Generic (PLEG): container finished" podID="093c506d-ed96-47c2-8e8d-c499d82381e5" containerID="9e893c4c720928866cc86870cc6fcbeece6fabe0770c6ec902b27ffc7f44fa19" exitCode=0 Dec 03 21:09:45 crc kubenswrapper[4765]: I1203 21:09:45.075830 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbzl" event={"ID":"093c506d-ed96-47c2-8e8d-c499d82381e5","Type":"ContainerDied","Data":"9e893c4c720928866cc86870cc6fcbeece6fabe0770c6ec902b27ffc7f44fa19"} Dec 03 21:09:45 crc kubenswrapper[4765]: I1203 21:09:45.076357 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llbzl" event={"ID":"093c506d-ed96-47c2-8e8d-c499d82381e5","Type":"ContainerStarted","Data":"806bdbf83d90ed0ab2b4cc440193e6031af52d5882afb084d2c01982fbbb2cfe"} Dec 03 21:09:45 crc kubenswrapper[4765]: I1203 21:09:45.100008 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llbzl" podStartSLOduration=2.362870997 podStartE2EDuration="7.099990371s" podCreationTimestamp="2025-12-03 21:09:38 +0000 UTC" firstStartedPulling="2025-12-03 21:09:39.976694491 +0000 UTC m=+1877.907239642" lastFinishedPulling="2025-12-03 21:09:44.713813875 +0000 UTC m=+1882.644359016" observedRunningTime="2025-12-03 21:09:45.09343913 +0000 UTC m=+1883.023984311" watchObservedRunningTime="2025-12-03 21:09:45.099990371 +0000 UTC m=+1883.030535522" Dec 03 21:09:46 crc kubenswrapper[4765]: I1203 21:09:46.092807 4765 generic.go:334] "Generic (PLEG): container finished" podID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerID="0ac7b4b352071a7c5b3f31ca51e507021d69e889bd217e6f2ba142ebb1280f3c" exitCode=0 Dec 
03 21:09:46 crc kubenswrapper[4765]: I1203 21:09:46.092887 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqkv" event={"ID":"e3e5b162-d7e9-42f8-88f9-56ce13962ccd","Type":"ContainerDied","Data":"0ac7b4b352071a7c5b3f31ca51e507021d69e889bd217e6f2ba142ebb1280f3c"} Dec 03 21:09:47 crc kubenswrapper[4765]: I1203 21:09:47.105351 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqkv" event={"ID":"e3e5b162-d7e9-42f8-88f9-56ce13962ccd","Type":"ContainerStarted","Data":"7e47f76ee91f02e3578f24aa5574df45585b3dc8faa044e6d22b630542a02a29"} Dec 03 21:09:47 crc kubenswrapper[4765]: I1203 21:09:47.134227 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xjqkv" podStartSLOduration=3.485734325 podStartE2EDuration="6.134202982s" podCreationTimestamp="2025-12-03 21:09:41 +0000 UTC" firstStartedPulling="2025-12-03 21:09:44.070278094 +0000 UTC m=+1882.000823245" lastFinishedPulling="2025-12-03 21:09:46.718746741 +0000 UTC m=+1884.649291902" observedRunningTime="2025-12-03 21:09:47.128255188 +0000 UTC m=+1885.058800359" watchObservedRunningTime="2025-12-03 21:09:47.134202982 +0000 UTC m=+1885.064748143" Dec 03 21:09:47 crc kubenswrapper[4765]: I1203 21:09:47.359751 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:09:47 crc kubenswrapper[4765]: E1203 21:09:47.360007 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:09:49 crc kubenswrapper[4765]: I1203 
21:09:49.268033 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:49 crc kubenswrapper[4765]: I1203 21:09:49.268092 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:49 crc kubenswrapper[4765]: I1203 21:09:49.315672 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:50 crc kubenswrapper[4765]: I1203 21:09:50.200362 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llbzl" Dec 03 21:09:50 crc kubenswrapper[4765]: I1203 21:09:50.335106 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llbzl"] Dec 03 21:09:50 crc kubenswrapper[4765]: I1203 21:09:50.510759 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r6rmn"] Dec 03 21:09:50 crc kubenswrapper[4765]: I1203 21:09:50.511011 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r6rmn" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="registry-server" containerID="cri-o://445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9" gracePeriod=2 Dec 03 21:09:51 crc kubenswrapper[4765]: I1203 21:09:51.713008 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:51 crc kubenswrapper[4765]: I1203 21:09:51.713413 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:51 crc kubenswrapper[4765]: I1203 21:09:51.771204 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xjqkv" 
Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.099262 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.152559 4765 generic.go:334] "Generic (PLEG): container finished" podID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerID="445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9" exitCode=0 Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.153633 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r6rmn" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.154144 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r6rmn" event={"ID":"1feb87dd-af7d-4048-bf1c-df1541bb8301","Type":"ContainerDied","Data":"445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9"} Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.154182 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r6rmn" event={"ID":"1feb87dd-af7d-4048-bf1c-df1541bb8301","Type":"ContainerDied","Data":"7c0e0f206b8906c05b88b1b404f264d52d5661a1b9505a974bdac4ca880f594f"} Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.154204 4765 scope.go:117] "RemoveContainer" containerID="445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.182704 4765 scope.go:117] "RemoveContainer" containerID="aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.207331 4765 scope.go:117] "RemoveContainer" containerID="fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.216045 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.256870 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwx9c\" (UniqueName: \"kubernetes.io/projected/1feb87dd-af7d-4048-bf1c-df1541bb8301-kube-api-access-hwx9c\") pod \"1feb87dd-af7d-4048-bf1c-df1541bb8301\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.257049 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-utilities\") pod \"1feb87dd-af7d-4048-bf1c-df1541bb8301\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.257151 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-catalog-content\") pod \"1feb87dd-af7d-4048-bf1c-df1541bb8301\" (UID: \"1feb87dd-af7d-4048-bf1c-df1541bb8301\") " Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.260553 4765 scope.go:117] "RemoveContainer" containerID="445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.262537 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-utilities" (OuterVolumeSpecName: "utilities") pod "1feb87dd-af7d-4048-bf1c-df1541bb8301" (UID: "1feb87dd-af7d-4048-bf1c-df1541bb8301"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.266579 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1feb87dd-af7d-4048-bf1c-df1541bb8301-kube-api-access-hwx9c" (OuterVolumeSpecName: "kube-api-access-hwx9c") pod "1feb87dd-af7d-4048-bf1c-df1541bb8301" (UID: "1feb87dd-af7d-4048-bf1c-df1541bb8301"). InnerVolumeSpecName "kube-api-access-hwx9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:09:52 crc kubenswrapper[4765]: E1203 21:09:52.273451 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9\": container with ID starting with 445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9 not found: ID does not exist" containerID="445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.273502 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9"} err="failed to get container status \"445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9\": rpc error: code = NotFound desc = could not find container \"445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9\": container with ID starting with 445b138373465f79a2b40330d52c18594c95c2be84a7a57b8535536a3f7d2ee9 not found: ID does not exist" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.273529 4765 scope.go:117] "RemoveContainer" containerID="aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e" Dec 03 21:09:52 crc kubenswrapper[4765]: E1203 21:09:52.277363 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e\": container with ID starting with aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e not found: ID does not exist" containerID="aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.277395 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e"} err="failed to get container status \"aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e\": rpc error: code = NotFound desc = could not find container \"aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e\": container with ID starting with aff26ee33f916422499c8110ec522886e5cc43fb097eb7d978ec0e2b6ee1b59e not found: ID does not exist" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.277410 4765 scope.go:117] "RemoveContainer" containerID="fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25" Dec 03 21:09:52 crc kubenswrapper[4765]: E1203 21:09:52.281360 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25\": container with ID starting with fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25 not found: ID does not exist" containerID="fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.281387 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25"} err="failed to get container status \"fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25\": rpc error: code = NotFound desc = could not find container \"fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25\": container with ID 
starting with fda2ca6891b367528b0b51c31ba837f85cd8338f4b8468a807e731d9d0845d25 not found: ID does not exist" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.338452 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1feb87dd-af7d-4048-bf1c-df1541bb8301" (UID: "1feb87dd-af7d-4048-bf1c-df1541bb8301"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.358777 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.358821 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwx9c\" (UniqueName: \"kubernetes.io/projected/1feb87dd-af7d-4048-bf1c-df1541bb8301-kube-api-access-hwx9c\") on node \"crc\" DevicePath \"\"" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.358836 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1feb87dd-af7d-4048-bf1c-df1541bb8301-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.472688 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r6rmn"] Dec 03 21:09:52 crc kubenswrapper[4765]: I1203 21:09:52.481572 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r6rmn"] Dec 03 21:09:54 crc kubenswrapper[4765]: I1203 21:09:54.121116 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqkv"] Dec 03 21:09:54 crc kubenswrapper[4765]: I1203 21:09:54.370220 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" path="/var/lib/kubelet/pods/1feb87dd-af7d-4048-bf1c-df1541bb8301/volumes" Dec 03 21:09:55 crc kubenswrapper[4765]: I1203 21:09:55.185342 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xjqkv" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="registry-server" containerID="cri-o://7e47f76ee91f02e3578f24aa5574df45585b3dc8faa044e6d22b630542a02a29" gracePeriod=2 Dec 03 21:09:56 crc kubenswrapper[4765]: I1203 21:09:56.197189 4765 generic.go:334] "Generic (PLEG): container finished" podID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerID="7e47f76ee91f02e3578f24aa5574df45585b3dc8faa044e6d22b630542a02a29" exitCode=0 Dec 03 21:09:56 crc kubenswrapper[4765]: I1203 21:09:56.197333 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqkv" event={"ID":"e3e5b162-d7e9-42f8-88f9-56ce13962ccd","Type":"ContainerDied","Data":"7e47f76ee91f02e3578f24aa5574df45585b3dc8faa044e6d22b630542a02a29"} Dec 03 21:09:56 crc kubenswrapper[4765]: I1203 21:09:56.943245 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.067292 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvwwt\" (UniqueName: \"kubernetes.io/projected/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-kube-api-access-bvwwt\") pod \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.067509 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-catalog-content\") pod \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.067626 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-utilities\") pod \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\" (UID: \"e3e5b162-d7e9-42f8-88f9-56ce13962ccd\") " Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.068257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-utilities" (OuterVolumeSpecName: "utilities") pod "e3e5b162-d7e9-42f8-88f9-56ce13962ccd" (UID: "e3e5b162-d7e9-42f8-88f9-56ce13962ccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.087534 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-kube-api-access-bvwwt" (OuterVolumeSpecName: "kube-api-access-bvwwt") pod "e3e5b162-d7e9-42f8-88f9-56ce13962ccd" (UID: "e3e5b162-d7e9-42f8-88f9-56ce13962ccd"). InnerVolumeSpecName "kube-api-access-bvwwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.160643 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3e5b162-d7e9-42f8-88f9-56ce13962ccd" (UID: "e3e5b162-d7e9-42f8-88f9-56ce13962ccd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.169524 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvwwt\" (UniqueName: \"kubernetes.io/projected/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-kube-api-access-bvwwt\") on node \"crc\" DevicePath \"\"" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.169558 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.169569 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3e5b162-d7e9-42f8-88f9-56ce13962ccd-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.206651 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqkv" event={"ID":"e3e5b162-d7e9-42f8-88f9-56ce13962ccd","Type":"ContainerDied","Data":"d5b5d26bb1848b50272d60f61baefe55f55ba91243e097470fe4ac988948fe0c"} Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.206702 4765 scope.go:117] "RemoveContainer" containerID="7e47f76ee91f02e3578f24aa5574df45585b3dc8faa044e6d22b630542a02a29" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.206724 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqkv" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.234743 4765 scope.go:117] "RemoveContainer" containerID="0ac7b4b352071a7c5b3f31ca51e507021d69e889bd217e6f2ba142ebb1280f3c" Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.241413 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqkv"] Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.255097 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqkv"] Dec 03 21:09:57 crc kubenswrapper[4765]: I1203 21:09:57.266828 4765 scope.go:117] "RemoveContainer" containerID="370ebb813c50f8919ce4de4bb534da5cb10d5a3835a049b97db5dbdc6e820d32" Dec 03 21:09:58 crc kubenswrapper[4765]: I1203 21:09:58.376438 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" path="/var/lib/kubelet/pods/e3e5b162-d7e9-42f8-88f9-56ce13962ccd/volumes" Dec 03 21:10:02 crc kubenswrapper[4765]: I1203 21:10:02.370847 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:10:02 crc kubenswrapper[4765]: E1203 21:10:02.372122 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:10:13 crc kubenswrapper[4765]: I1203 21:10:13.359937 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:10:13 crc kubenswrapper[4765]: E1203 21:10:13.362846 4765 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.444382 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.451366 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.466409 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.472492 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.479129 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qg2p"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.486680 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.494211 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.500905 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.510635 4765 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xn6ln"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.518890 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.527774 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wnqgb"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.535856 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7wpct"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.543748 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dwnxd"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.551061 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wnqgb"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.556831 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7qmqd"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.562453 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2z5sl"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.567767 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d8sdp"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.573138 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z4gw4"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 21:10:18.579966 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf"] Dec 03 21:10:18 crc kubenswrapper[4765]: I1203 
21:10:18.585961 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mdjxf"] Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.378102 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f0819b-6af9-4004-a8f4-ecb6f7eeb535" path="/var/lib/kubelet/pods/35f0819b-6af9-4004-a8f4-ecb6f7eeb535/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.380120 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c935ca3-bb76-490b-b05d-47f3d91136cb" path="/var/lib/kubelet/pods/4c935ca3-bb76-490b-b05d-47f3d91136cb/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.381618 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea4e33c-e6fc-46e9-9fb8-639e43bd000b" path="/var/lib/kubelet/pods/4ea4e33c-e6fc-46e9-9fb8-639e43bd000b/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.383027 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f554675-9b8f-47ef-88d2-21532b35bf7e" path="/var/lib/kubelet/pods/6f554675-9b8f-47ef-88d2-21532b35bf7e/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.385852 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f461fc-4404-425c-a66f-d06f2f31c027" path="/var/lib/kubelet/pods/75f461fc-4404-425c-a66f-d06f2f31c027/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.387345 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a7a88b-9262-4c20-922e-89aa3d551eff" path="/var/lib/kubelet/pods/89a7a88b-9262-4c20-922e-89aa3d551eff/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.388825 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9940e2fd-580e-4f33-99b6-5441ea17b717" path="/var/lib/kubelet/pods/9940e2fd-580e-4f33-99b6-5441ea17b717/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.391163 4765 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="9e0694eb-01f7-42d1-bb82-aa0c84f1df92" path="/var/lib/kubelet/pods/9e0694eb-01f7-42d1-bb82-aa0c84f1df92/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.391810 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25cfdc8-0ddd-4d7d-bda7-914d65f31caf" path="/var/lib/kubelet/pods/e25cfdc8-0ddd-4d7d-bda7-914d65f31caf/volumes" Dec 03 21:10:20 crc kubenswrapper[4765]: I1203 21:10:20.392441 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc376d4-2492-4695-882b-270070bcd17a" path="/var/lib/kubelet/pods/fcc376d4-2492-4695-882b-270070bcd17a/volumes" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.516664 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5"] Dec 03 21:10:24 crc kubenswrapper[4765]: E1203 21:10:24.519610 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="registry-server" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.519640 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="registry-server" Dec 03 21:10:24 crc kubenswrapper[4765]: E1203 21:10:24.519666 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="registry-server" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.519676 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="registry-server" Dec 03 21:10:24 crc kubenswrapper[4765]: E1203 21:10:24.519690 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="extract-content" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.519698 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" 
containerName="extract-content" Dec 03 21:10:24 crc kubenswrapper[4765]: E1203 21:10:24.519725 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="extract-content" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.519733 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="extract-content" Dec 03 21:10:24 crc kubenswrapper[4765]: E1203 21:10:24.519755 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="extract-utilities" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.519763 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="extract-utilities" Dec 03 21:10:24 crc kubenswrapper[4765]: E1203 21:10:24.519779 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="extract-utilities" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.519786 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="extract-utilities" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.520018 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e5b162-d7e9-42f8-88f9-56ce13962ccd" containerName="registry-server" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.520034 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1feb87dd-af7d-4048-bf1c-df1541bb8301" containerName="registry-server" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.521105 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.523292 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.524670 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.524833 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.525068 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.525240 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.529571 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5"] Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.696675 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.697024 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhpg\" (UniqueName: \"kubernetes.io/projected/47b92082-05ae-430d-bdfd-836be92480a8-kube-api-access-bvhpg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: 
\"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.697112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.697140 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.697190 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.799110 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.799173 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.799227 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhpg\" (UniqueName: \"kubernetes.io/projected/47b92082-05ae-430d-bdfd-836be92480a8-kube-api-access-bvhpg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.799322 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.799363 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.805763 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.807018 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.807059 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.812747 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.818649 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhpg\" (UniqueName: \"kubernetes.io/projected/47b92082-05ae-430d-bdfd-836be92480a8-kube-api-access-bvhpg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:24 crc kubenswrapper[4765]: I1203 21:10:24.855057 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:25 crc kubenswrapper[4765]: I1203 21:10:25.360268 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:10:25 crc kubenswrapper[4765]: E1203 21:10:25.360879 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:10:25 crc kubenswrapper[4765]: I1203 21:10:25.444230 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5"] Dec 03 21:10:25 crc kubenswrapper[4765]: W1203 21:10:25.445636 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47b92082_05ae_430d_bdfd_836be92480a8.slice/crio-79d761623c3cec6252cc8404fd9770f389efb0bcf3fad86c0cd4fd9112da4d47 WatchSource:0}: Error finding container 79d761623c3cec6252cc8404fd9770f389efb0bcf3fad86c0cd4fd9112da4d47: Status 404 returned error can't find the container with id 79d761623c3cec6252cc8404fd9770f389efb0bcf3fad86c0cd4fd9112da4d47 Dec 03 21:10:25 crc kubenswrapper[4765]: I1203 21:10:25.508715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" event={"ID":"47b92082-05ae-430d-bdfd-836be92480a8","Type":"ContainerStarted","Data":"79d761623c3cec6252cc8404fd9770f389efb0bcf3fad86c0cd4fd9112da4d47"} Dec 03 21:10:27 crc kubenswrapper[4765]: I1203 21:10:27.527522 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" event={"ID":"47b92082-05ae-430d-bdfd-836be92480a8","Type":"ContainerStarted","Data":"9dc44a78e08165bfdd291210ed2d877fdaaa088a3b0ec466f6ec54ec93aa2f12"} Dec 03 21:10:27 crc kubenswrapper[4765]: I1203 21:10:27.543942 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" podStartSLOduration=2.591069484 podStartE2EDuration="3.543923878s" podCreationTimestamp="2025-12-03 21:10:24 +0000 UTC" firstStartedPulling="2025-12-03 21:10:25.448230777 +0000 UTC m=+1923.378775928" lastFinishedPulling="2025-12-03 21:10:26.401085171 +0000 UTC m=+1924.331630322" observedRunningTime="2025-12-03 21:10:27.543287259 +0000 UTC m=+1925.473832430" watchObservedRunningTime="2025-12-03 21:10:27.543923878 +0000 UTC m=+1925.474469049" Dec 03 21:10:29 crc kubenswrapper[4765]: I1203 21:10:29.395030 4765 scope.go:117] "RemoveContainer" containerID="7e7c228df01cd5d7a1471aa2ddcf59079b45071c862cb9a22415431172750bf3" Dec 03 21:10:29 crc kubenswrapper[4765]: I1203 21:10:29.443618 4765 scope.go:117] "RemoveContainer" containerID="c2075be5eba25c8076017253b086c4011349b8217676b8025d2271fd8e2c16c2" Dec 03 21:10:29 crc kubenswrapper[4765]: I1203 21:10:29.477886 4765 scope.go:117] "RemoveContainer" containerID="6700fa303c83eb1605307888580ca25c6fa9918f8c85ad4c34180d1ac296aaa8" Dec 03 21:10:29 crc kubenswrapper[4765]: I1203 21:10:29.572718 4765 scope.go:117] "RemoveContainer" containerID="4b32dc937e49e91d34eddbc2059d144f4788e12e59d0edcec647f2923f6b361e" Dec 03 21:10:29 crc kubenswrapper[4765]: I1203 21:10:29.642988 4765 scope.go:117] "RemoveContainer" containerID="33d200e78c618970df8cf59f7454eed8051c111922e41d2eeb0b2e2b8f43c356" Dec 03 21:10:38 crc kubenswrapper[4765]: I1203 21:10:38.360264 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:10:38 crc kubenswrapper[4765]: E1203 
21:10:38.361087 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:10:38 crc kubenswrapper[4765]: I1203 21:10:38.646788 4765 generic.go:334] "Generic (PLEG): container finished" podID="47b92082-05ae-430d-bdfd-836be92480a8" containerID="9dc44a78e08165bfdd291210ed2d877fdaaa088a3b0ec466f6ec54ec93aa2f12" exitCode=0 Dec 03 21:10:38 crc kubenswrapper[4765]: I1203 21:10:38.646833 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" event={"ID":"47b92082-05ae-430d-bdfd-836be92480a8","Type":"ContainerDied","Data":"9dc44a78e08165bfdd291210ed2d877fdaaa088a3b0ec466f6ec54ec93aa2f12"} Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.112016 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.314750 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-inventory\") pod \"47b92082-05ae-430d-bdfd-836be92480a8\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.314934 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ceph\") pod \"47b92082-05ae-430d-bdfd-836be92480a8\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.315575 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ssh-key\") pod \"47b92082-05ae-430d-bdfd-836be92480a8\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.315728 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-repo-setup-combined-ca-bundle\") pod \"47b92082-05ae-430d-bdfd-836be92480a8\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.315792 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvhpg\" (UniqueName: \"kubernetes.io/projected/47b92082-05ae-430d-bdfd-836be92480a8-kube-api-access-bvhpg\") pod \"47b92082-05ae-430d-bdfd-836be92480a8\" (UID: \"47b92082-05ae-430d-bdfd-836be92480a8\") " Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.320326 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/47b92082-05ae-430d-bdfd-836be92480a8-kube-api-access-bvhpg" (OuterVolumeSpecName: "kube-api-access-bvhpg") pod "47b92082-05ae-430d-bdfd-836be92480a8" (UID: "47b92082-05ae-430d-bdfd-836be92480a8"). InnerVolumeSpecName "kube-api-access-bvhpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.321002 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ceph" (OuterVolumeSpecName: "ceph") pod "47b92082-05ae-430d-bdfd-836be92480a8" (UID: "47b92082-05ae-430d-bdfd-836be92480a8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.321781 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "47b92082-05ae-430d-bdfd-836be92480a8" (UID: "47b92082-05ae-430d-bdfd-836be92480a8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.350117 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "47b92082-05ae-430d-bdfd-836be92480a8" (UID: "47b92082-05ae-430d-bdfd-836be92480a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.350642 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-inventory" (OuterVolumeSpecName: "inventory") pod "47b92082-05ae-430d-bdfd-836be92480a8" (UID: "47b92082-05ae-430d-bdfd-836be92480a8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.418139 4765 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.418173 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvhpg\" (UniqueName: \"kubernetes.io/projected/47b92082-05ae-430d-bdfd-836be92480a8-kube-api-access-bvhpg\") on node \"crc\" DevicePath \"\"" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.418187 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.418199 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.418211 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47b92082-05ae-430d-bdfd-836be92480a8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.671794 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" event={"ID":"47b92082-05ae-430d-bdfd-836be92480a8","Type":"ContainerDied","Data":"79d761623c3cec6252cc8404fd9770f389efb0bcf3fad86c0cd4fd9112da4d47"} Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.671856 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d761623c3cec6252cc8404fd9770f389efb0bcf3fad86c0cd4fd9112da4d47" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.671933 
4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.810844 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx"] Dec 03 21:10:40 crc kubenswrapper[4765]: E1203 21:10:40.811262 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b92082-05ae-430d-bdfd-836be92480a8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.811283 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b92082-05ae-430d-bdfd-836be92480a8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.811512 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b92082-05ae-430d-bdfd-836be92480a8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.812352 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.815386 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.815417 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.815432 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.816774 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.817867 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.821744 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx"] Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.838040 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.838112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.838166 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.838230 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.838261 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nthvz\" (UniqueName: \"kubernetes.io/projected/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-kube-api-access-nthvz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.949407 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.949745 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.949837 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.949874 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nthvz\" (UniqueName: \"kubernetes.io/projected/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-kube-api-access-nthvz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.949921 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.953757 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.953826 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.954115 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.954239 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:40 crc kubenswrapper[4765]: I1203 21:10:40.967556 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nthvz\" (UniqueName: \"kubernetes.io/projected/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-kube-api-access-nthvz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:41 crc kubenswrapper[4765]: I1203 21:10:41.146794 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:10:41 crc kubenswrapper[4765]: I1203 21:10:41.768012 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx"] Dec 03 21:10:42 crc kubenswrapper[4765]: I1203 21:10:42.707514 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" event={"ID":"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32","Type":"ContainerStarted","Data":"2e2cd3fccbe026d89789bd7c7b7d0df428911f2973b07bea1c70e0ae2685eb6a"} Dec 03 21:10:42 crc kubenswrapper[4765]: I1203 21:10:42.707963 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" event={"ID":"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32","Type":"ContainerStarted","Data":"91a7f5cb379b7fea36997af14374e9e0cfc6990f5743c0dcb099a7c81774c6e5"} Dec 03 21:10:42 crc kubenswrapper[4765]: I1203 21:10:42.736963 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" podStartSLOduration=2.344816393 podStartE2EDuration="2.736943087s" podCreationTimestamp="2025-12-03 21:10:40 +0000 UTC" firstStartedPulling="2025-12-03 21:10:41.801769543 +0000 UTC m=+1939.732314704" lastFinishedPulling="2025-12-03 21:10:42.193896247 +0000 UTC m=+1940.124441398" observedRunningTime="2025-12-03 21:10:42.726181075 +0000 UTC m=+1940.656726256" watchObservedRunningTime="2025-12-03 21:10:42.736943087 +0000 UTC m=+1940.667488238" Dec 03 21:10:50 crc kubenswrapper[4765]: I1203 21:10:50.359852 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:10:50 crc kubenswrapper[4765]: E1203 21:10:50.360668 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:11:04 crc kubenswrapper[4765]: I1203 21:11:04.360905 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:11:04 crc kubenswrapper[4765]: E1203 21:11:04.362052 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:11:18 crc kubenswrapper[4765]: I1203 21:11:18.359482 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:11:18 crc kubenswrapper[4765]: E1203 21:11:18.360280 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:11:29 crc kubenswrapper[4765]: I1203 21:11:29.878329 4765 scope.go:117] "RemoveContainer" containerID="a586b908aa604624dcecc2ff285e3675cd688e91280e4baa231f10d4a41a04f9" Dec 03 21:11:29 crc kubenswrapper[4765]: I1203 21:11:29.922987 4765 scope.go:117] "RemoveContainer" containerID="4f9590658e87331fde740636a35d5979133b02bd26f65e265d246f5028b9174c" Dec 03 
21:11:30 crc kubenswrapper[4765]: I1203 21:11:30.360092 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:11:31 crc kubenswrapper[4765]: I1203 21:11:31.234395 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"2922e087a1232fd3024eb1a7fa81c56ddda8193852e4c595b9e4df95b134f51b"} Dec 03 21:12:25 crc kubenswrapper[4765]: I1203 21:12:25.787490 4765 generic.go:334] "Generic (PLEG): container finished" podID="e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" containerID="2e2cd3fccbe026d89789bd7c7b7d0df428911f2973b07bea1c70e0ae2685eb6a" exitCode=0 Dec 03 21:12:25 crc kubenswrapper[4765]: I1203 21:12:25.787582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" event={"ID":"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32","Type":"ContainerDied","Data":"2e2cd3fccbe026d89789bd7c7b7d0df428911f2973b07bea1c70e0ae2685eb6a"} Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.367386 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.480145 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-bootstrap-combined-ca-bundle\") pod \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.480234 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nthvz\" (UniqueName: \"kubernetes.io/projected/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-kube-api-access-nthvz\") pod \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.480277 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ssh-key\") pod \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.480347 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ceph\") pod \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.480451 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-inventory\") pod \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\" (UID: \"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32\") " Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.486530 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-kube-api-access-nthvz" (OuterVolumeSpecName: "kube-api-access-nthvz") pod "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" (UID: "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32"). InnerVolumeSpecName "kube-api-access-nthvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.486905 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" (UID: "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.495387 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ceph" (OuterVolumeSpecName: "ceph") pod "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" (UID: "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.507054 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" (UID: "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.514219 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-inventory" (OuterVolumeSpecName: "inventory") pod "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" (UID: "e3b2c2f7-5ef3-47e1-bb0e-3298074acb32"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.582620 4765 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.582656 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nthvz\" (UniqueName: \"kubernetes.io/projected/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-kube-api-access-nthvz\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.582666 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.582675 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.582683 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3b2c2f7-5ef3-47e1-bb0e-3298074acb32-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.813640 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" event={"ID":"e3b2c2f7-5ef3-47e1-bb0e-3298074acb32","Type":"ContainerDied","Data":"91a7f5cb379b7fea36997af14374e9e0cfc6990f5743c0dcb099a7c81774c6e5"} Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.814024 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a7f5cb379b7fea36997af14374e9e0cfc6990f5743c0dcb099a7c81774c6e5" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.813793 4765 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.941273 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7"] Dec 03 21:12:27 crc kubenswrapper[4765]: E1203 21:12:27.941879 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.941914 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.942231 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b2c2f7-5ef3-47e1-bb0e-3298074acb32" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.943229 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.952877 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.953023 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.953111 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.953204 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.956762 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:12:27 crc kubenswrapper[4765]: I1203 21:12:27.968694 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7"] Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.090868 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh894\" (UniqueName: \"kubernetes.io/projected/c766674f-ed9a-4a8c-8c83-a94542469c60-kube-api-access-bh894\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.090949 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: 
\"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.091003 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.091058 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.192466 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.192646 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh894\" (UniqueName: \"kubernetes.io/projected/c766674f-ed9a-4a8c-8c83-a94542469c60-kube-api-access-bh894\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.192746 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.192788 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.203878 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.203900 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.204448 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.220435 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh894\" (UniqueName: \"kubernetes.io/projected/c766674f-ed9a-4a8c-8c83-a94542469c60-kube-api-access-bh894\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.266591 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:28 crc kubenswrapper[4765]: I1203 21:12:28.873454 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7"] Dec 03 21:12:28 crc kubenswrapper[4765]: W1203 21:12:28.882406 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc766674f_ed9a_4a8c_8c83_a94542469c60.slice/crio-174e5b9218a9dfe0aa0188fc52ab8be16bb38a7c8f0056ed75e2299ca12adac4 WatchSource:0}: Error finding container 174e5b9218a9dfe0aa0188fc52ab8be16bb38a7c8f0056ed75e2299ca12adac4: Status 404 returned error can't find the container with id 174e5b9218a9dfe0aa0188fc52ab8be16bb38a7c8f0056ed75e2299ca12adac4 Dec 03 21:12:29 crc kubenswrapper[4765]: I1203 21:12:29.836327 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" event={"ID":"c766674f-ed9a-4a8c-8c83-a94542469c60","Type":"ContainerStarted","Data":"e0c938c2b5e1d2ab35bc748bfcb864f2f70ed0b1f631c76ad69baa92746622e5"} Dec 03 21:12:29 crc kubenswrapper[4765]: I1203 21:12:29.837068 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" event={"ID":"c766674f-ed9a-4a8c-8c83-a94542469c60","Type":"ContainerStarted","Data":"174e5b9218a9dfe0aa0188fc52ab8be16bb38a7c8f0056ed75e2299ca12adac4"} Dec 03 21:12:29 crc kubenswrapper[4765]: I1203 21:12:29.867404 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" podStartSLOduration=2.380091073 podStartE2EDuration="2.867381524s" podCreationTimestamp="2025-12-03 21:12:27 +0000 UTC" firstStartedPulling="2025-12-03 21:12:28.884746615 +0000 UTC m=+2046.815291806" lastFinishedPulling="2025-12-03 21:12:29.372037086 +0000 UTC m=+2047.302582257" observedRunningTime="2025-12-03 21:12:29.852842711 +0000 UTC m=+2047.783387902" watchObservedRunningTime="2025-12-03 21:12:29.867381524 +0000 UTC m=+2047.797926695" Dec 03 21:12:30 crc kubenswrapper[4765]: I1203 21:12:30.036680 4765 scope.go:117] "RemoveContainer" containerID="2d85f2a30d09624e37d57e37c48f052558099a3b0fd37a98c0b9247808ac290c" Dec 03 21:12:30 crc kubenswrapper[4765]: I1203 21:12:30.094031 4765 scope.go:117] "RemoveContainer" containerID="e8e15e8613a6d9429f82e232ea9c52d518813ac0458fba5384d975aecb96bec6" Dec 03 21:12:30 crc kubenswrapper[4765]: I1203 21:12:30.134663 4765 scope.go:117] "RemoveContainer" containerID="4181cab5c2737767eaa48a54aafd4037fc169ecfb8cda7fec6d498c8f6e3a204" Dec 03 21:12:56 crc kubenswrapper[4765]: I1203 21:12:56.134660 4765 generic.go:334] "Generic (PLEG): container finished" podID="c766674f-ed9a-4a8c-8c83-a94542469c60" containerID="e0c938c2b5e1d2ab35bc748bfcb864f2f70ed0b1f631c76ad69baa92746622e5" exitCode=0 Dec 03 21:12:56 crc kubenswrapper[4765]: I1203 21:12:56.134741 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" 
event={"ID":"c766674f-ed9a-4a8c-8c83-a94542469c60","Type":"ContainerDied","Data":"e0c938c2b5e1d2ab35bc748bfcb864f2f70ed0b1f631c76ad69baa92746622e5"} Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.595525 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.685278 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh894\" (UniqueName: \"kubernetes.io/projected/c766674f-ed9a-4a8c-8c83-a94542469c60-kube-api-access-bh894\") pod \"c766674f-ed9a-4a8c-8c83-a94542469c60\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.685656 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-inventory\") pod \"c766674f-ed9a-4a8c-8c83-a94542469c60\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.685687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ceph\") pod \"c766674f-ed9a-4a8c-8c83-a94542469c60\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.685738 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ssh-key\") pod \"c766674f-ed9a-4a8c-8c83-a94542469c60\" (UID: \"c766674f-ed9a-4a8c-8c83-a94542469c60\") " Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.691097 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ceph" (OuterVolumeSpecName: "ceph") pod 
"c766674f-ed9a-4a8c-8c83-a94542469c60" (UID: "c766674f-ed9a-4a8c-8c83-a94542469c60"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.692224 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c766674f-ed9a-4a8c-8c83-a94542469c60-kube-api-access-bh894" (OuterVolumeSpecName: "kube-api-access-bh894") pod "c766674f-ed9a-4a8c-8c83-a94542469c60" (UID: "c766674f-ed9a-4a8c-8c83-a94542469c60"). InnerVolumeSpecName "kube-api-access-bh894". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.712153 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-inventory" (OuterVolumeSpecName: "inventory") pod "c766674f-ed9a-4a8c-8c83-a94542469c60" (UID: "c766674f-ed9a-4a8c-8c83-a94542469c60"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.719438 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c766674f-ed9a-4a8c-8c83-a94542469c60" (UID: "c766674f-ed9a-4a8c-8c83-a94542469c60"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.788112 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh894\" (UniqueName: \"kubernetes.io/projected/c766674f-ed9a-4a8c-8c83-a94542469c60-kube-api-access-bh894\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.788468 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.788611 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:57 crc kubenswrapper[4765]: I1203 21:12:57.788737 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c766674f-ed9a-4a8c-8c83-a94542469c60-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.159480 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" event={"ID":"c766674f-ed9a-4a8c-8c83-a94542469c60","Type":"ContainerDied","Data":"174e5b9218a9dfe0aa0188fc52ab8be16bb38a7c8f0056ed75e2299ca12adac4"} Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.159538 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="174e5b9218a9dfe0aa0188fc52ab8be16bb38a7c8f0056ed75e2299ca12adac4" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.159519 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.258646 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t"] Dec 03 21:12:58 crc kubenswrapper[4765]: E1203 21:12:58.259080 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c766674f-ed9a-4a8c-8c83-a94542469c60" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.259104 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c766674f-ed9a-4a8c-8c83-a94542469c60" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.259331 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c766674f-ed9a-4a8c-8c83-a94542469c60" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.260044 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.262168 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.262196 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.262488 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.262508 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.263128 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.268916 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t"] Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.297554 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.297628 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: 
\"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.297693 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5mrd\" (UniqueName: \"kubernetes.io/projected/db43dd3d-a5b5-4cc3-bfbd-18689908b450-kube-api-access-x5mrd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.297823 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.399225 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.399417 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.399494 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.399594 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5mrd\" (UniqueName: \"kubernetes.io/projected/db43dd3d-a5b5-4cc3-bfbd-18689908b450-kube-api-access-x5mrd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.403157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.404855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.410755 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.419670 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5mrd\" (UniqueName: \"kubernetes.io/projected/db43dd3d-a5b5-4cc3-bfbd-18689908b450-kube-api-access-x5mrd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:58 crc kubenswrapper[4765]: I1203 21:12:58.578916 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:12:59 crc kubenswrapper[4765]: I1203 21:12:59.298373 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t"] Dec 03 21:13:00 crc kubenswrapper[4765]: I1203 21:13:00.178795 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" event={"ID":"db43dd3d-a5b5-4cc3-bfbd-18689908b450","Type":"ContainerStarted","Data":"a13788fe9472ff08381975472985271b067f7da922a098fa08e60393e0c64734"} Dec 03 21:13:01 crc kubenswrapper[4765]: I1203 21:13:01.191897 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" event={"ID":"db43dd3d-a5b5-4cc3-bfbd-18689908b450","Type":"ContainerStarted","Data":"e727804cbf9341392cfde823d9c2a84b7a39873369621040e81bd7a74703058f"} Dec 03 21:13:05 crc kubenswrapper[4765]: I1203 21:13:05.240063 4765 generic.go:334] "Generic (PLEG): container finished" podID="db43dd3d-a5b5-4cc3-bfbd-18689908b450" containerID="e727804cbf9341392cfde823d9c2a84b7a39873369621040e81bd7a74703058f" exitCode=0 Dec 03 21:13:05 crc kubenswrapper[4765]: I1203 21:13:05.240136 4765 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" event={"ID":"db43dd3d-a5b5-4cc3-bfbd-18689908b450","Type":"ContainerDied","Data":"e727804cbf9341392cfde823d9c2a84b7a39873369621040e81bd7a74703058f"} Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.682307 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.808772 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ceph\") pod \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.809014 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ssh-key\") pod \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.809110 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5mrd\" (UniqueName: \"kubernetes.io/projected/db43dd3d-a5b5-4cc3-bfbd-18689908b450-kube-api-access-x5mrd\") pod \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.809171 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-inventory\") pod \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\" (UID: \"db43dd3d-a5b5-4cc3-bfbd-18689908b450\") " Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.820026 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/db43dd3d-a5b5-4cc3-bfbd-18689908b450-kube-api-access-x5mrd" (OuterVolumeSpecName: "kube-api-access-x5mrd") pod "db43dd3d-a5b5-4cc3-bfbd-18689908b450" (UID: "db43dd3d-a5b5-4cc3-bfbd-18689908b450"). InnerVolumeSpecName "kube-api-access-x5mrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.827698 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ceph" (OuterVolumeSpecName: "ceph") pod "db43dd3d-a5b5-4cc3-bfbd-18689908b450" (UID: "db43dd3d-a5b5-4cc3-bfbd-18689908b450"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.834921 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-inventory" (OuterVolumeSpecName: "inventory") pod "db43dd3d-a5b5-4cc3-bfbd-18689908b450" (UID: "db43dd3d-a5b5-4cc3-bfbd-18689908b450"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.840797 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "db43dd3d-a5b5-4cc3-bfbd-18689908b450" (UID: "db43dd3d-a5b5-4cc3-bfbd-18689908b450"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.911450 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.911486 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.911501 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5mrd\" (UniqueName: \"kubernetes.io/projected/db43dd3d-a5b5-4cc3-bfbd-18689908b450-kube-api-access-x5mrd\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:06 crc kubenswrapper[4765]: I1203 21:13:06.911514 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db43dd3d-a5b5-4cc3-bfbd-18689908b450-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.264243 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" event={"ID":"db43dd3d-a5b5-4cc3-bfbd-18689908b450","Type":"ContainerDied","Data":"a13788fe9472ff08381975472985271b067f7da922a098fa08e60393e0c64734"} Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.264352 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13788fe9472ff08381975472985271b067f7da922a098fa08e60393e0c64734" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.264363 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.386445 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn"] Dec 03 21:13:07 crc kubenswrapper[4765]: E1203 21:13:07.386963 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43dd3d-a5b5-4cc3-bfbd-18689908b450" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.387001 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="db43dd3d-a5b5-4cc3-bfbd-18689908b450" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.387361 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43dd3d-a5b5-4cc3-bfbd-18689908b450" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.388200 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.395199 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.395460 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.395711 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.395863 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.396072 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.401060 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn"] Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.522324 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.522408 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5445\" (UniqueName: \"kubernetes.io/projected/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-kube-api-access-x5445\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: 
\"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.522538 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.522610 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.624672 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.624753 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.624798 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.624834 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5445\" (UniqueName: \"kubernetes.io/projected/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-kube-api-access-x5445\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.628566 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.640117 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.640368 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 
21:13:07.641353 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5445\" (UniqueName: \"kubernetes.io/projected/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-kube-api-access-x5445\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p2brn\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:07 crc kubenswrapper[4765]: I1203 21:13:07.726971 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:08 crc kubenswrapper[4765]: I1203 21:13:08.253970 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn"] Dec 03 21:13:08 crc kubenswrapper[4765]: I1203 21:13:08.273092 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" event={"ID":"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52","Type":"ContainerStarted","Data":"60a0b0b38cdffc332f35ecccbbb4a27fd2f1105792e2dd27e80300fb38e11f1f"} Dec 03 21:13:09 crc kubenswrapper[4765]: I1203 21:13:09.283054 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" event={"ID":"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52","Type":"ContainerStarted","Data":"48d0db190c37a588836cb65f729bcbd845dea2e7450555bc91676ac6ddd40332"} Dec 03 21:13:09 crc kubenswrapper[4765]: I1203 21:13:09.305909 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" podStartSLOduration=1.790685156 podStartE2EDuration="2.305851446s" podCreationTimestamp="2025-12-03 21:13:07 +0000 UTC" firstStartedPulling="2025-12-03 21:13:08.260211202 +0000 UTC m=+2086.190756353" lastFinishedPulling="2025-12-03 21:13:08.775377452 +0000 UTC m=+2086.705922643" observedRunningTime="2025-12-03 
21:13:09.301598371 +0000 UTC m=+2087.232143562" watchObservedRunningTime="2025-12-03 21:13:09.305851446 +0000 UTC m=+2087.236396617" Dec 03 21:13:46 crc kubenswrapper[4765]: I1203 21:13:46.607648 4765 generic.go:334] "Generic (PLEG): container finished" podID="b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" containerID="48d0db190c37a588836cb65f729bcbd845dea2e7450555bc91676ac6ddd40332" exitCode=0 Dec 03 21:13:46 crc kubenswrapper[4765]: I1203 21:13:46.607755 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" event={"ID":"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52","Type":"ContainerDied","Data":"48d0db190c37a588836cb65f729bcbd845dea2e7450555bc91676ac6ddd40332"} Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.044575 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.216144 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ssh-key\") pod \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.216218 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ceph\") pod \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.216367 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5445\" (UniqueName: \"kubernetes.io/projected/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-kube-api-access-x5445\") pod \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " Dec 03 21:13:48 crc 
kubenswrapper[4765]: I1203 21:13:48.216447 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-inventory\") pod \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\" (UID: \"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52\") " Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.221678 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ceph" (OuterVolumeSpecName: "ceph") pod "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" (UID: "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.222635 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-kube-api-access-x5445" (OuterVolumeSpecName: "kube-api-access-x5445") pod "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" (UID: "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52"). InnerVolumeSpecName "kube-api-access-x5445". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.244136 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-inventory" (OuterVolumeSpecName: "inventory") pod "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" (UID: "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.249749 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" (UID: "b6cce00b-a9f8-4d5e-abbf-0e72ce498b52"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.318725 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.318768 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.318780 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.318789 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5445\" (UniqueName: \"kubernetes.io/projected/b6cce00b-a9f8-4d5e-abbf-0e72ce498b52-kube-api-access-x5445\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.643582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" event={"ID":"b6cce00b-a9f8-4d5e-abbf-0e72ce498b52","Type":"ContainerDied","Data":"60a0b0b38cdffc332f35ecccbbb4a27fd2f1105792e2dd27e80300fb38e11f1f"} Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.644044 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a0b0b38cdffc332f35ecccbbb4a27fd2f1105792e2dd27e80300fb38e11f1f" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.643696 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p2brn" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.740548 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4"] Dec 03 21:13:48 crc kubenswrapper[4765]: E1203 21:13:48.740931 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.740949 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.741116 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cce00b-a9f8-4d5e-abbf-0e72ce498b52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.741744 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.744372 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.744431 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.747215 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.747285 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.747321 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.757903 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4"] Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.929019 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.929074 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.929152 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxh5\" (UniqueName: \"kubernetes.io/projected/f56cb10b-3bc1-42b6-90e6-8d1802c20167-kube-api-access-ttxh5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:48 crc kubenswrapper[4765]: I1203 21:13:48.929212 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.031460 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.031845 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.031894 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.031994 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxh5\" (UniqueName: \"kubernetes.io/projected/f56cb10b-3bc1-42b6-90e6-8d1802c20167-kube-api-access-ttxh5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.038400 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.038809 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.039403 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc 
kubenswrapper[4765]: I1203 21:13:49.050735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxh5\" (UniqueName: \"kubernetes.io/projected/f56cb10b-3bc1-42b6-90e6-8d1802c20167-kube-api-access-ttxh5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.109828 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.645707 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4"] Dec 03 21:13:49 crc kubenswrapper[4765]: I1203 21:13:49.655053 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" event={"ID":"f56cb10b-3bc1-42b6-90e6-8d1802c20167","Type":"ContainerStarted","Data":"1aacb3078e1a6cbce7ddbfd4d72e0a04f1b528b8c797a06ced5a507c88d01d51"} Dec 03 21:13:50 crc kubenswrapper[4765]: I1203 21:13:50.681076 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" event={"ID":"f56cb10b-3bc1-42b6-90e6-8d1802c20167","Type":"ContainerStarted","Data":"f20c89f79326d25274ca0e87cee1a3952fcba633056435a2f2a3ccb8c9866819"} Dec 03 21:13:50 crc kubenswrapper[4765]: I1203 21:13:50.714870 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" podStartSLOduration=2.31264184 podStartE2EDuration="2.714845661s" podCreationTimestamp="2025-12-03 21:13:48 +0000 UTC" firstStartedPulling="2025-12-03 21:13:49.648796054 +0000 UTC m=+2127.579341215" lastFinishedPulling="2025-12-03 21:13:50.050999855 +0000 UTC m=+2127.981545036" 
observedRunningTime="2025-12-03 21:13:50.698649472 +0000 UTC m=+2128.629194673" watchObservedRunningTime="2025-12-03 21:13:50.714845661 +0000 UTC m=+2128.645390812" Dec 03 21:13:54 crc kubenswrapper[4765]: I1203 21:13:54.735482 4765 generic.go:334] "Generic (PLEG): container finished" podID="f56cb10b-3bc1-42b6-90e6-8d1802c20167" containerID="f20c89f79326d25274ca0e87cee1a3952fcba633056435a2f2a3ccb8c9866819" exitCode=0 Dec 03 21:13:54 crc kubenswrapper[4765]: I1203 21:13:54.735570 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" event={"ID":"f56cb10b-3bc1-42b6-90e6-8d1802c20167","Type":"ContainerDied","Data":"f20c89f79326d25274ca0e87cee1a3952fcba633056435a2f2a3ccb8c9866819"} Dec 03 21:13:54 crc kubenswrapper[4765]: I1203 21:13:54.798379 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:13:54 crc kubenswrapper[4765]: I1203 21:13:54.798476 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.165409 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.311825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ceph\") pod \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.311873 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-inventory\") pod \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.311960 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttxh5\" (UniqueName: \"kubernetes.io/projected/f56cb10b-3bc1-42b6-90e6-8d1802c20167-kube-api-access-ttxh5\") pod \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.312031 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ssh-key\") pod \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\" (UID: \"f56cb10b-3bc1-42b6-90e6-8d1802c20167\") " Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.318546 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ceph" (OuterVolumeSpecName: "ceph") pod "f56cb10b-3bc1-42b6-90e6-8d1802c20167" (UID: "f56cb10b-3bc1-42b6-90e6-8d1802c20167"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.326520 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56cb10b-3bc1-42b6-90e6-8d1802c20167-kube-api-access-ttxh5" (OuterVolumeSpecName: "kube-api-access-ttxh5") pod "f56cb10b-3bc1-42b6-90e6-8d1802c20167" (UID: "f56cb10b-3bc1-42b6-90e6-8d1802c20167"). InnerVolumeSpecName "kube-api-access-ttxh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.346826 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f56cb10b-3bc1-42b6-90e6-8d1802c20167" (UID: "f56cb10b-3bc1-42b6-90e6-8d1802c20167"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.360786 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-inventory" (OuterVolumeSpecName: "inventory") pod "f56cb10b-3bc1-42b6-90e6-8d1802c20167" (UID: "f56cb10b-3bc1-42b6-90e6-8d1802c20167"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.415867 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.415921 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.415943 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttxh5\" (UniqueName: \"kubernetes.io/projected/f56cb10b-3bc1-42b6-90e6-8d1802c20167-kube-api-access-ttxh5\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.415962 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f56cb10b-3bc1-42b6-90e6-8d1802c20167-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.760982 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" event={"ID":"f56cb10b-3bc1-42b6-90e6-8d1802c20167","Type":"ContainerDied","Data":"1aacb3078e1a6cbce7ddbfd4d72e0a04f1b528b8c797a06ced5a507c88d01d51"} Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.761037 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aacb3078e1a6cbce7ddbfd4d72e0a04f1b528b8c797a06ced5a507c88d01d51" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.761013 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.843952 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw"] Dec 03 21:13:56 crc kubenswrapper[4765]: E1203 21:13:56.844374 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56cb10b-3bc1-42b6-90e6-8d1802c20167" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.844397 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56cb10b-3bc1-42b6-90e6-8d1802c20167" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.844607 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56cb10b-3bc1-42b6-90e6-8d1802c20167" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.845274 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.848918 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.848941 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.849072 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.849237 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.849328 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:13:56 crc kubenswrapper[4765]: I1203 21:13:56.855066 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw"] Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.025434 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.025768 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z57q\" (UniqueName: \"kubernetes.io/projected/0acda383-efb7-45c7-8ead-19f3bb2bac36-kube-api-access-8z57q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: 
\"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.025887 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.026031 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.128373 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z57q\" (UniqueName: \"kubernetes.io/projected/0acda383-efb7-45c7-8ead-19f3bb2bac36-kube-api-access-8z57q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.128479 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.128541 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.128658 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.132875 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.133965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.137105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc 
kubenswrapper[4765]: I1203 21:13:57.146877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z57q\" (UniqueName: \"kubernetes.io/projected/0acda383-efb7-45c7-8ead-19f3bb2bac36-kube-api-access-8z57q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45dqw\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.171874 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:13:57 crc kubenswrapper[4765]: I1203 21:13:57.784106 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw"] Dec 03 21:13:58 crc kubenswrapper[4765]: I1203 21:13:58.789255 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" event={"ID":"0acda383-efb7-45c7-8ead-19f3bb2bac36","Type":"ContainerStarted","Data":"504b018bc405377bfb03f863f71d3fb010b5df91a27f47e7fe1505e8b5d6d217"} Dec 03 21:13:58 crc kubenswrapper[4765]: I1203 21:13:58.789747 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" event={"ID":"0acda383-efb7-45c7-8ead-19f3bb2bac36","Type":"ContainerStarted","Data":"006c20f50e75c41e32e37abbe46e20e2bbbc357aa4e78da4bea4609055dcf363"} Dec 03 21:13:58 crc kubenswrapper[4765]: I1203 21:13:58.814048 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" podStartSLOduration=2.350192001 podStartE2EDuration="2.814030511s" podCreationTimestamp="2025-12-03 21:13:56 +0000 UTC" firstStartedPulling="2025-12-03 21:13:57.79507608 +0000 UTC m=+2135.725621231" lastFinishedPulling="2025-12-03 21:13:58.25891459 +0000 UTC m=+2136.189459741" 
observedRunningTime="2025-12-03 21:13:58.812901581 +0000 UTC m=+2136.743446772" watchObservedRunningTime="2025-12-03 21:13:58.814030511 +0000 UTC m=+2136.744575662" Dec 03 21:14:24 crc kubenswrapper[4765]: I1203 21:14:24.798253 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:14:24 crc kubenswrapper[4765]: I1203 21:14:24.798966 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:14:44 crc kubenswrapper[4765]: I1203 21:14:44.185033 4765 generic.go:334] "Generic (PLEG): container finished" podID="0acda383-efb7-45c7-8ead-19f3bb2bac36" containerID="504b018bc405377bfb03f863f71d3fb010b5df91a27f47e7fe1505e8b5d6d217" exitCode=0 Dec 03 21:14:44 crc kubenswrapper[4765]: I1203 21:14:44.185146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" event={"ID":"0acda383-efb7-45c7-8ead-19f3bb2bac36","Type":"ContainerDied","Data":"504b018bc405377bfb03f863f71d3fb010b5df91a27f47e7fe1505e8b5d6d217"} Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.671964 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.729039 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z57q\" (UniqueName: \"kubernetes.io/projected/0acda383-efb7-45c7-8ead-19f3bb2bac36-kube-api-access-8z57q\") pod \"0acda383-efb7-45c7-8ead-19f3bb2bac36\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.729157 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ceph\") pod \"0acda383-efb7-45c7-8ead-19f3bb2bac36\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.729196 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-inventory\") pod \"0acda383-efb7-45c7-8ead-19f3bb2bac36\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.729412 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ssh-key\") pod \"0acda383-efb7-45c7-8ead-19f3bb2bac36\" (UID: \"0acda383-efb7-45c7-8ead-19f3bb2bac36\") " Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.736706 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acda383-efb7-45c7-8ead-19f3bb2bac36-kube-api-access-8z57q" (OuterVolumeSpecName: "kube-api-access-8z57q") pod "0acda383-efb7-45c7-8ead-19f3bb2bac36" (UID: "0acda383-efb7-45c7-8ead-19f3bb2bac36"). InnerVolumeSpecName "kube-api-access-8z57q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.737346 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ceph" (OuterVolumeSpecName: "ceph") pod "0acda383-efb7-45c7-8ead-19f3bb2bac36" (UID: "0acda383-efb7-45c7-8ead-19f3bb2bac36"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.758979 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0acda383-efb7-45c7-8ead-19f3bb2bac36" (UID: "0acda383-efb7-45c7-8ead-19f3bb2bac36"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.763696 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-inventory" (OuterVolumeSpecName: "inventory") pod "0acda383-efb7-45c7-8ead-19f3bb2bac36" (UID: "0acda383-efb7-45c7-8ead-19f3bb2bac36"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.833757 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z57q\" (UniqueName: \"kubernetes.io/projected/0acda383-efb7-45c7-8ead-19f3bb2bac36-kube-api-access-8z57q\") on node \"crc\" DevicePath \"\"" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.833802 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.833813 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:14:45 crc kubenswrapper[4765]: I1203 21:14:45.833823 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0acda383-efb7-45c7-8ead-19f3bb2bac36-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.208318 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" event={"ID":"0acda383-efb7-45c7-8ead-19f3bb2bac36","Type":"ContainerDied","Data":"006c20f50e75c41e32e37abbe46e20e2bbbc357aa4e78da4bea4609055dcf363"} Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.208828 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006c20f50e75c41e32e37abbe46e20e2bbbc357aa4e78da4bea4609055dcf363" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.208550 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45dqw" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.313955 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sxk7z"] Dec 03 21:14:46 crc kubenswrapper[4765]: E1203 21:14:46.314387 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acda383-efb7-45c7-8ead-19f3bb2bac36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.314404 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acda383-efb7-45c7-8ead-19f3bb2bac36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.314573 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acda383-efb7-45c7-8ead-19f3bb2bac36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.315145 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.318169 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.318220 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.318246 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.318394 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.318502 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.334676 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sxk7z"] Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.343712 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxn8\" (UniqueName: \"kubernetes.io/projected/d1f5d4f1-df58-457b-b56a-64c6cda175a4-kube-api-access-vtxn8\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.343755 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.343816 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.343842 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ceph\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.444945 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxn8\" (UniqueName: \"kubernetes.io/projected/d1f5d4f1-df58-457b-b56a-64c6cda175a4-kube-api-access-vtxn8\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.444989 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.445065 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.445111 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ceph\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.452721 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.453386 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ceph\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.458649 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.460579 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxn8\" (UniqueName: \"kubernetes.io/projected/d1f5d4f1-df58-457b-b56a-64c6cda175a4-kube-api-access-vtxn8\") pod 
\"ssh-known-hosts-edpm-deployment-sxk7z\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:46 crc kubenswrapper[4765]: I1203 21:14:46.643111 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:47 crc kubenswrapper[4765]: I1203 21:14:47.226839 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sxk7z"] Dec 03 21:14:47 crc kubenswrapper[4765]: I1203 21:14:47.256591 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:14:48 crc kubenswrapper[4765]: I1203 21:14:48.229159 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" event={"ID":"d1f5d4f1-df58-457b-b56a-64c6cda175a4","Type":"ContainerStarted","Data":"7f50d9cfd6f22e2d023bb7a8807cffb0d910bd1a5ee0674e74b2000d1453c09d"} Dec 03 21:14:48 crc kubenswrapper[4765]: I1203 21:14:48.229521 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" event={"ID":"d1f5d4f1-df58-457b-b56a-64c6cda175a4","Type":"ContainerStarted","Data":"b937e156d306f93fe67671399741aedf5ae9d6b960db049802651642c360c8d1"} Dec 03 21:14:48 crc kubenswrapper[4765]: I1203 21:14:48.251135 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" podStartSLOduration=1.750682019 podStartE2EDuration="2.251118211s" podCreationTimestamp="2025-12-03 21:14:46 +0000 UTC" firstStartedPulling="2025-12-03 21:14:47.256212411 +0000 UTC m=+2185.186757592" lastFinishedPulling="2025-12-03 21:14:47.756648622 +0000 UTC m=+2185.687193784" observedRunningTime="2025-12-03 21:14:48.24623092 +0000 UTC m=+2186.176776071" watchObservedRunningTime="2025-12-03 21:14:48.251118211 +0000 UTC m=+2186.181663362" Dec 03 21:14:54 crc kubenswrapper[4765]: 
I1203 21:14:54.802805 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:14:54 crc kubenswrapper[4765]: I1203 21:14:54.804661 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:14:54 crc kubenswrapper[4765]: I1203 21:14:54.804804 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:14:54 crc kubenswrapper[4765]: I1203 21:14:54.805692 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2922e087a1232fd3024eb1a7fa81c56ddda8193852e4c595b9e4df95b134f51b"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:14:54 crc kubenswrapper[4765]: I1203 21:14:54.805868 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://2922e087a1232fd3024eb1a7fa81c56ddda8193852e4c595b9e4df95b134f51b" gracePeriod=600 Dec 03 21:14:55 crc kubenswrapper[4765]: I1203 21:14:55.282044 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="2922e087a1232fd3024eb1a7fa81c56ddda8193852e4c595b9e4df95b134f51b" exitCode=0 Dec 
03 21:14:55 crc kubenswrapper[4765]: I1203 21:14:55.282068 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"2922e087a1232fd3024eb1a7fa81c56ddda8193852e4c595b9e4df95b134f51b"} Dec 03 21:14:55 crc kubenswrapper[4765]: I1203 21:14:55.282330 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"} Dec 03 21:14:55 crc kubenswrapper[4765]: I1203 21:14:55.282354 4765 scope.go:117] "RemoveContainer" containerID="1b0ac22dd25eed7670c1e760421fd302d5818dfd0d479dd8c213b3c83b9547ae" Dec 03 21:14:58 crc kubenswrapper[4765]: I1203 21:14:58.314589 4765 generic.go:334] "Generic (PLEG): container finished" podID="d1f5d4f1-df58-457b-b56a-64c6cda175a4" containerID="7f50d9cfd6f22e2d023bb7a8807cffb0d910bd1a5ee0674e74b2000d1453c09d" exitCode=0 Dec 03 21:14:58 crc kubenswrapper[4765]: I1203 21:14:58.314675 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" event={"ID":"d1f5d4f1-df58-457b-b56a-64c6cda175a4","Type":"ContainerDied","Data":"7f50d9cfd6f22e2d023bb7a8807cffb0d910bd1a5ee0674e74b2000d1453c09d"} Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.834828 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.906509 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ssh-key-openstack-edpm-ipam\") pod \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.906809 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-inventory-0\") pod \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.906861 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxn8\" (UniqueName: \"kubernetes.io/projected/d1f5d4f1-df58-457b-b56a-64c6cda175a4-kube-api-access-vtxn8\") pod \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.906907 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ceph\") pod \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\" (UID: \"d1f5d4f1-df58-457b-b56a-64c6cda175a4\") " Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.912211 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ceph" (OuterVolumeSpecName: "ceph") pod "d1f5d4f1-df58-457b-b56a-64c6cda175a4" (UID: "d1f5d4f1-df58-457b-b56a-64c6cda175a4"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.913097 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f5d4f1-df58-457b-b56a-64c6cda175a4-kube-api-access-vtxn8" (OuterVolumeSpecName: "kube-api-access-vtxn8") pod "d1f5d4f1-df58-457b-b56a-64c6cda175a4" (UID: "d1f5d4f1-df58-457b-b56a-64c6cda175a4"). InnerVolumeSpecName "kube-api-access-vtxn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.932468 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1f5d4f1-df58-457b-b56a-64c6cda175a4" (UID: "d1f5d4f1-df58-457b-b56a-64c6cda175a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:14:59 crc kubenswrapper[4765]: I1203 21:14:59.933235 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d1f5d4f1-df58-457b-b56a-64c6cda175a4" (UID: "d1f5d4f1-df58-457b-b56a-64c6cda175a4"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.009094 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.009141 4765 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.009151 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxn8\" (UniqueName: \"kubernetes.io/projected/d1f5d4f1-df58-457b-b56a-64c6cda175a4-kube-api-access-vtxn8\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.009159 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1f5d4f1-df58-457b-b56a-64c6cda175a4-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.145007 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh"] Dec 03 21:15:00 crc kubenswrapper[4765]: E1203 21:15:00.145418 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f5d4f1-df58-457b-b56a-64c6cda175a4" containerName="ssh-known-hosts-edpm-deployment" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.145435 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f5d4f1-df58-457b-b56a-64c6cda175a4" containerName="ssh-known-hosts-edpm-deployment" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.145608 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f5d4f1-df58-457b-b56a-64c6cda175a4" containerName="ssh-known-hosts-edpm-deployment" Dec 03 21:15:00 crc 
kubenswrapper[4765]: I1203 21:15:00.146121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.151616 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.155284 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.168395 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh"] Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.212945 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5jx5\" (UniqueName: \"kubernetes.io/projected/df4d7464-27fb-40ef-95f1-7131fc9264e7-kube-api-access-w5jx5\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.213046 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df4d7464-27fb-40ef-95f1-7131fc9264e7-secret-volume\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.213123 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df4d7464-27fb-40ef-95f1-7131fc9264e7-config-volume\") pod \"collect-profiles-29413275-fr5kh\" 
(UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.314883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df4d7464-27fb-40ef-95f1-7131fc9264e7-config-volume\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.315011 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jx5\" (UniqueName: \"kubernetes.io/projected/df4d7464-27fb-40ef-95f1-7131fc9264e7-kube-api-access-w5jx5\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.315075 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df4d7464-27fb-40ef-95f1-7131fc9264e7-secret-volume\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.315761 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df4d7464-27fb-40ef-95f1-7131fc9264e7-config-volume\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.320049 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/df4d7464-27fb-40ef-95f1-7131fc9264e7-secret-volume\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.331478 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jx5\" (UniqueName: \"kubernetes.io/projected/df4d7464-27fb-40ef-95f1-7131fc9264e7-kube-api-access-w5jx5\") pod \"collect-profiles-29413275-fr5kh\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.340231 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" event={"ID":"d1f5d4f1-df58-457b-b56a-64c6cda175a4","Type":"ContainerDied","Data":"b937e156d306f93fe67671399741aedf5ae9d6b960db049802651642c360c8d1"} Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.340277 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b937e156d306f93fe67671399741aedf5ae9d6b960db049802651642c360c8d1" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.340286 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sxk7z" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.400257 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd"] Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.401661 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.404589 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.404636 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.404663 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.404963 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.405219 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.413292 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd"] Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.488763 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.518528 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.518947 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.519013 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.519045 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfbv\" (UniqueName: \"kubernetes.io/projected/743d2875-36d7-427b-af2e-c8a8e8d5a81c-kube-api-access-hnfbv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.621550 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.621639 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.621678 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfbv\" (UniqueName: \"kubernetes.io/projected/743d2875-36d7-427b-af2e-c8a8e8d5a81c-kube-api-access-hnfbv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.621777 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.627612 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.628946 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.636091 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.639980 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfbv\" (UniqueName: \"kubernetes.io/projected/743d2875-36d7-427b-af2e-c8a8e8d5a81c-kube-api-access-hnfbv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hg7nd\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.716967 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd"
Dec 03 21:15:00 crc kubenswrapper[4765]: I1203 21:15:00.942332 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh"]
Dec 03 21:15:01 crc kubenswrapper[4765]: I1203 21:15:01.265352 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd"]
Dec 03 21:15:01 crc kubenswrapper[4765]: W1203 21:15:01.276553 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod743d2875_36d7_427b_af2e_c8a8e8d5a81c.slice/crio-25a7185e26a829e9b74f5c190991579227a68ae643ba4c0d9cd773c7a3876942 WatchSource:0}: Error finding container 25a7185e26a829e9b74f5c190991579227a68ae643ba4c0d9cd773c7a3876942: Status 404 returned error can't find the container with id 25a7185e26a829e9b74f5c190991579227a68ae643ba4c0d9cd773c7a3876942
Dec 03 21:15:01 crc kubenswrapper[4765]: I1203 21:15:01.349076 4765 generic.go:334] "Generic (PLEG): container finished" podID="df4d7464-27fb-40ef-95f1-7131fc9264e7" containerID="1bdfd148ffd36cfdabb9c01d80a2820d05f70e97f3e17eab2353a2c819e64712" exitCode=0
Dec 03 21:15:01 crc kubenswrapper[4765]: I1203 21:15:01.349145 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" event={"ID":"df4d7464-27fb-40ef-95f1-7131fc9264e7","Type":"ContainerDied","Data":"1bdfd148ffd36cfdabb9c01d80a2820d05f70e97f3e17eab2353a2c819e64712"}
Dec 03 21:15:01 crc kubenswrapper[4765]: I1203 21:15:01.349220 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" event={"ID":"df4d7464-27fb-40ef-95f1-7131fc9264e7","Type":"ContainerStarted","Data":"f3595e669d867490fae7929bc8960df28c9aa446f19c69e7a72828a41ba5d885"}
Dec 03 21:15:01 crc kubenswrapper[4765]: I1203 21:15:01.350975 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" event={"ID":"743d2875-36d7-427b-af2e-c8a8e8d5a81c","Type":"ContainerStarted","Data":"25a7185e26a829e9b74f5c190991579227a68ae643ba4c0d9cd773c7a3876942"}
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.396568 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" event={"ID":"743d2875-36d7-427b-af2e-c8a8e8d5a81c","Type":"ContainerStarted","Data":"4beab9e112ddcd93ceb7fd36158c328120ffff45dbced926c725f4eb09216165"}
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.436159 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" podStartSLOduration=1.981746857 podStartE2EDuration="2.43613474s" podCreationTimestamp="2025-12-03 21:15:00 +0000 UTC" firstStartedPulling="2025-12-03 21:15:01.278566795 +0000 UTC m=+2199.209111946" lastFinishedPulling="2025-12-03 21:15:01.732954648 +0000 UTC m=+2199.663499829" observedRunningTime="2025-12-03 21:15:02.423146845 +0000 UTC m=+2200.353692036" watchObservedRunningTime="2025-12-03 21:15:02.43613474 +0000 UTC m=+2200.366679901"
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.792540 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh"
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.864782 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5jx5\" (UniqueName: \"kubernetes.io/projected/df4d7464-27fb-40ef-95f1-7131fc9264e7-kube-api-access-w5jx5\") pod \"df4d7464-27fb-40ef-95f1-7131fc9264e7\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") "
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.864861 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df4d7464-27fb-40ef-95f1-7131fc9264e7-config-volume\") pod \"df4d7464-27fb-40ef-95f1-7131fc9264e7\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") "
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.864902 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df4d7464-27fb-40ef-95f1-7131fc9264e7-secret-volume\") pod \"df4d7464-27fb-40ef-95f1-7131fc9264e7\" (UID: \"df4d7464-27fb-40ef-95f1-7131fc9264e7\") "
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.865854 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4d7464-27fb-40ef-95f1-7131fc9264e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "df4d7464-27fb-40ef-95f1-7131fc9264e7" (UID: "df4d7464-27fb-40ef-95f1-7131fc9264e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.870662 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d7464-27fb-40ef-95f1-7131fc9264e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df4d7464-27fb-40ef-95f1-7131fc9264e7" (UID: "df4d7464-27fb-40ef-95f1-7131fc9264e7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.871557 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4d7464-27fb-40ef-95f1-7131fc9264e7-kube-api-access-w5jx5" (OuterVolumeSpecName: "kube-api-access-w5jx5") pod "df4d7464-27fb-40ef-95f1-7131fc9264e7" (UID: "df4d7464-27fb-40ef-95f1-7131fc9264e7"). InnerVolumeSpecName "kube-api-access-w5jx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.966775 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df4d7464-27fb-40ef-95f1-7131fc9264e7-config-volume\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.966814 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df4d7464-27fb-40ef-95f1-7131fc9264e7-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:02 crc kubenswrapper[4765]: I1203 21:15:02.966827 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5jx5\" (UniqueName: \"kubernetes.io/projected/df4d7464-27fb-40ef-95f1-7131fc9264e7-kube-api-access-w5jx5\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:03 crc kubenswrapper[4765]: I1203 21:15:03.375947 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh"
Dec 03 21:15:03 crc kubenswrapper[4765]: I1203 21:15:03.375943 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413275-fr5kh" event={"ID":"df4d7464-27fb-40ef-95f1-7131fc9264e7","Type":"ContainerDied","Data":"f3595e669d867490fae7929bc8960df28c9aa446f19c69e7a72828a41ba5d885"}
Dec 03 21:15:03 crc kubenswrapper[4765]: I1203 21:15:03.376102 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3595e669d867490fae7929bc8960df28c9aa446f19c69e7a72828a41ba5d885"
Dec 03 21:15:03 crc kubenswrapper[4765]: I1203 21:15:03.874816 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs"]
Dec 03 21:15:03 crc kubenswrapper[4765]: I1203 21:15:03.883433 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413230-bdsvs"]
Dec 03 21:15:04 crc kubenswrapper[4765]: I1203 21:15:04.384700 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d367ea-4210-4883-96bc-54987e5f6f7a" path="/var/lib/kubelet/pods/c6d367ea-4210-4883-96bc-54987e5f6f7a/volumes"
Dec 03 21:15:10 crc kubenswrapper[4765]: I1203 21:15:10.451851 4765 generic.go:334] "Generic (PLEG): container finished" podID="743d2875-36d7-427b-af2e-c8a8e8d5a81c" containerID="4beab9e112ddcd93ceb7fd36158c328120ffff45dbced926c725f4eb09216165" exitCode=0
Dec 03 21:15:10 crc kubenswrapper[4765]: I1203 21:15:10.451902 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" event={"ID":"743d2875-36d7-427b-af2e-c8a8e8d5a81c","Type":"ContainerDied","Data":"4beab9e112ddcd93ceb7fd36158c328120ffff45dbced926c725f4eb09216165"}
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.861571 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd"
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.938595 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ssh-key\") pod \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") "
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.938744 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnfbv\" (UniqueName: \"kubernetes.io/projected/743d2875-36d7-427b-af2e-c8a8e8d5a81c-kube-api-access-hnfbv\") pod \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") "
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.938782 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ceph\") pod \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") "
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.938823 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-inventory\") pod \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\" (UID: \"743d2875-36d7-427b-af2e-c8a8e8d5a81c\") "
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.944485 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743d2875-36d7-427b-af2e-c8a8e8d5a81c-kube-api-access-hnfbv" (OuterVolumeSpecName: "kube-api-access-hnfbv") pod "743d2875-36d7-427b-af2e-c8a8e8d5a81c" (UID: "743d2875-36d7-427b-af2e-c8a8e8d5a81c"). InnerVolumeSpecName "kube-api-access-hnfbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.944739 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ceph" (OuterVolumeSpecName: "ceph") pod "743d2875-36d7-427b-af2e-c8a8e8d5a81c" (UID: "743d2875-36d7-427b-af2e-c8a8e8d5a81c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.963544 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "743d2875-36d7-427b-af2e-c8a8e8d5a81c" (UID: "743d2875-36d7-427b-af2e-c8a8e8d5a81c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:11 crc kubenswrapper[4765]: I1203 21:15:11.964388 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-inventory" (OuterVolumeSpecName: "inventory") pod "743d2875-36d7-427b-af2e-c8a8e8d5a81c" (UID: "743d2875-36d7-427b-af2e-c8a8e8d5a81c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.042492 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.042556 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnfbv\" (UniqueName: \"kubernetes.io/projected/743d2875-36d7-427b-af2e-c8a8e8d5a81c-kube-api-access-hnfbv\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.042572 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-ceph\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.042586 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743d2875-36d7-427b-af2e-c8a8e8d5a81c-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.473797 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd" event={"ID":"743d2875-36d7-427b-af2e-c8a8e8d5a81c","Type":"ContainerDied","Data":"25a7185e26a829e9b74f5c190991579227a68ae643ba4c0d9cd773c7a3876942"}
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.473859 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a7185e26a829e9b74f5c190991579227a68ae643ba4c0d9cd773c7a3876942"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.473945 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hg7nd"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.553108 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"]
Dec 03 21:15:12 crc kubenswrapper[4765]: E1203 21:15:12.553583 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4d7464-27fb-40ef-95f1-7131fc9264e7" containerName="collect-profiles"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.553601 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4d7464-27fb-40ef-95f1-7131fc9264e7" containerName="collect-profiles"
Dec 03 21:15:12 crc kubenswrapper[4765]: E1203 21:15:12.553616 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d2875-36d7-427b-af2e-c8a8e8d5a81c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.553623 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d2875-36d7-427b-af2e-c8a8e8d5a81c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.553942 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4d7464-27fb-40ef-95f1-7131fc9264e7" containerName="collect-profiles"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.553957 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="743d2875-36d7-427b-af2e-c8a8e8d5a81c" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.554527 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.558827 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.558868 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.558827 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.559045 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.559199 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.574381 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"]
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.653178 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.653854 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.654081 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.654355 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vct4l\" (UniqueName: \"kubernetes.io/projected/405fb54f-da87-4598-8f88-b9cb64799a12-kube-api-access-vct4l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.756477 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.757020 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vct4l\" (UniqueName: \"kubernetes.io/projected/405fb54f-da87-4598-8f88-b9cb64799a12-kube-api-access-vct4l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.757078 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.757137 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.761794 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.761868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.762182 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.776876 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vct4l\" (UniqueName: \"kubernetes.io/projected/405fb54f-da87-4598-8f88-b9cb64799a12-kube-api-access-vct4l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:12 crc kubenswrapper[4765]: I1203 21:15:12.874045 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:13 crc kubenswrapper[4765]: I1203 21:15:13.936013 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"]
Dec 03 21:15:13 crc kubenswrapper[4765]: W1203 21:15:13.937888 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod405fb54f_da87_4598_8f88_b9cb64799a12.slice/crio-2ad5c274afdc86013afde21288d813cf65538987f49ded97b9de37ba87199fa3 WatchSource:0}: Error finding container 2ad5c274afdc86013afde21288d813cf65538987f49ded97b9de37ba87199fa3: Status 404 returned error can't find the container with id 2ad5c274afdc86013afde21288d813cf65538987f49ded97b9de37ba87199fa3
Dec 03 21:15:14 crc kubenswrapper[4765]: I1203 21:15:14.860177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk" event={"ID":"405fb54f-da87-4598-8f88-b9cb64799a12","Type":"ContainerStarted","Data":"651485ec262ff38d6f3e3962ab95a12301f84985ee2190d69ca68b0fe72f456b"}
Dec 03 21:15:14 crc kubenswrapper[4765]: I1203 21:15:14.860510 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk" event={"ID":"405fb54f-da87-4598-8f88-b9cb64799a12","Type":"ContainerStarted","Data":"2ad5c274afdc86013afde21288d813cf65538987f49ded97b9de37ba87199fa3"}
Dec 03 21:15:14 crc kubenswrapper[4765]: I1203 21:15:14.876477 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk" podStartSLOduration=2.4634783860000002 podStartE2EDuration="2.876456812s" podCreationTimestamp="2025-12-03 21:15:12 +0000 UTC" firstStartedPulling="2025-12-03 21:15:13.939466358 +0000 UTC m=+2211.870011509" lastFinishedPulling="2025-12-03 21:15:14.352444784 +0000 UTC m=+2212.282989935" observedRunningTime="2025-12-03 21:15:14.873397955 +0000 UTC m=+2212.803943156" watchObservedRunningTime="2025-12-03 21:15:14.876456812 +0000 UTC m=+2212.807001963"
Dec 03 21:15:24 crc kubenswrapper[4765]: I1203 21:15:24.966020 4765 generic.go:334] "Generic (PLEG): container finished" podID="405fb54f-da87-4598-8f88-b9cb64799a12" containerID="651485ec262ff38d6f3e3962ab95a12301f84985ee2190d69ca68b0fe72f456b" exitCode=0
Dec 03 21:15:24 crc kubenswrapper[4765]: I1203 21:15:24.966157 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk" event={"ID":"405fb54f-da87-4598-8f88-b9cb64799a12","Type":"ContainerDied","Data":"651485ec262ff38d6f3e3962ab95a12301f84985ee2190d69ca68b0fe72f456b"}
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.497222 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.587959 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ssh-key\") pod \"405fb54f-da87-4598-8f88-b9cb64799a12\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") "
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.588027 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ceph\") pod \"405fb54f-da87-4598-8f88-b9cb64799a12\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") "
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.588162 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-inventory\") pod \"405fb54f-da87-4598-8f88-b9cb64799a12\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") "
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.588407 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vct4l\" (UniqueName: \"kubernetes.io/projected/405fb54f-da87-4598-8f88-b9cb64799a12-kube-api-access-vct4l\") pod \"405fb54f-da87-4598-8f88-b9cb64799a12\" (UID: \"405fb54f-da87-4598-8f88-b9cb64799a12\") "
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.594482 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405fb54f-da87-4598-8f88-b9cb64799a12-kube-api-access-vct4l" (OuterVolumeSpecName: "kube-api-access-vct4l") pod "405fb54f-da87-4598-8f88-b9cb64799a12" (UID: "405fb54f-da87-4598-8f88-b9cb64799a12"). InnerVolumeSpecName "kube-api-access-vct4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.595996 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ceph" (OuterVolumeSpecName: "ceph") pod "405fb54f-da87-4598-8f88-b9cb64799a12" (UID: "405fb54f-da87-4598-8f88-b9cb64799a12"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.618032 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "405fb54f-da87-4598-8f88-b9cb64799a12" (UID: "405fb54f-da87-4598-8f88-b9cb64799a12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.623809 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-inventory" (OuterVolumeSpecName: "inventory") pod "405fb54f-da87-4598-8f88-b9cb64799a12" (UID: "405fb54f-da87-4598-8f88-b9cb64799a12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.690646 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vct4l\" (UniqueName: \"kubernetes.io/projected/405fb54f-da87-4598-8f88-b9cb64799a12-kube-api-access-vct4l\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.690944 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.690954 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-ceph\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.690962 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/405fb54f-da87-4598-8f88-b9cb64799a12-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.989550 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk" event={"ID":"405fb54f-da87-4598-8f88-b9cb64799a12","Type":"ContainerDied","Data":"2ad5c274afdc86013afde21288d813cf65538987f49ded97b9de37ba87199fa3"}
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.989593 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad5c274afdc86013afde21288d813cf65538987f49ded97b9de37ba87199fa3"
Dec 03 21:15:26 crc kubenswrapper[4765]: I1203 21:15:26.989637 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.084615 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"]
Dec 03 21:15:27 crc kubenswrapper[4765]: E1203 21:15:27.085185 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405fb54f-da87-4598-8f88-b9cb64799a12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.085263 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="405fb54f-da87-4598-8f88-b9cb64799a12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.085518 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="405fb54f-da87-4598-8f88-b9cb64799a12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.086135 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.088739 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.088776 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.089414 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.091660 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.092145 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.096089 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.096686 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.102258 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.116654 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"]
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200231 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200291 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200448 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxsj\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-kube-api-access-jcxsj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200481 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200500 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200515 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200531 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200559 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200598 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200633 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200671 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"
Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.200728 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: 
\"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.310164 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.310521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.310642 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.310783 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: 
\"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.310921 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxsj\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-kube-api-access-jcxsj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311040 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311131 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311207 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311336 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311660 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311829 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.311949 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.312083 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.317250 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.317261 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.317290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.318388 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: 
\"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.318710 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.319777 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.325294 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxsj\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-kube-api-access-jcxsj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.329261 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc 
kubenswrapper[4765]: I1203 21:15:27.331605 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.332184 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.341606 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.342040 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.342106 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.403743 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:15:27 crc kubenswrapper[4765]: I1203 21:15:27.998019 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r"] Dec 03 21:15:29 crc kubenswrapper[4765]: I1203 21:15:29.009443 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" event={"ID":"51f7c3b1-f566-4371-ad1d-487bbfa1be12","Type":"ContainerStarted","Data":"189193d2b7f48e76d9c987f6baedc0db7b20255a4c795a7a4dc32fff613a9803"} Dec 03 21:15:29 crc kubenswrapper[4765]: I1203 21:15:29.009799 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" event={"ID":"51f7c3b1-f566-4371-ad1d-487bbfa1be12","Type":"ContainerStarted","Data":"69a4fe237d58d9cf5992944fb2d5a027568e7526d1f92be022033d60dab7f6c2"} Dec 03 21:15:29 crc kubenswrapper[4765]: I1203 21:15:29.044580 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" podStartSLOduration=1.576924375 podStartE2EDuration="2.044557726s" podCreationTimestamp="2025-12-03 21:15:27 +0000 UTC" firstStartedPulling="2025-12-03 21:15:28.003101226 +0000 UTC m=+2225.933646387" lastFinishedPulling="2025-12-03 21:15:28.470734587 +0000 UTC m=+2226.401279738" observedRunningTime="2025-12-03 21:15:29.038230688 +0000 UTC m=+2226.968775879" watchObservedRunningTime="2025-12-03 21:15:29.044557726 +0000 UTC m=+2226.975102897" 
Dec 03 21:15:30 crc kubenswrapper[4765]: I1203 21:15:30.264054 4765 scope.go:117] "RemoveContainer" containerID="424b6be4031807c74c3c047fb41f6a3646db42bce12e438e18006dec58df5441" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.503794 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pw4x2"] Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.506837 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.518840 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw4x2"] Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.666997 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rs5\" (UniqueName: \"kubernetes.io/projected/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-kube-api-access-b5rs5\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.667880 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-catalog-content\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.668049 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-utilities\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.708041 
4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9j49"] Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.709779 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.722939 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9j49"] Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.771445 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-catalog-content\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.771489 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9g2\" (UniqueName: \"kubernetes.io/projected/583aa03c-a2b2-4a09-8790-a00b8d961e97-kube-api-access-zm9g2\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.771541 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-utilities\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.771685 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-catalog-content\") pod \"certified-operators-l9j49\" (UID: 
\"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.771898 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5rs5\" (UniqueName: \"kubernetes.io/projected/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-kube-api-access-b5rs5\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.771974 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-catalog-content\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.772007 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-utilities\") pod \"redhat-operators-pw4x2\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.772062 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-utilities\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.795182 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5rs5\" (UniqueName: \"kubernetes.io/projected/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-kube-api-access-b5rs5\") pod \"redhat-operators-pw4x2\" (UID: 
\"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.831956 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.874068 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9g2\" (UniqueName: \"kubernetes.io/projected/583aa03c-a2b2-4a09-8790-a00b8d961e97-kube-api-access-zm9g2\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.874261 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-catalog-content\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.874368 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-utilities\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.874853 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-catalog-content\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.876521 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-utilities\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:37 crc kubenswrapper[4765]: I1203 21:15:37.890232 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9g2\" (UniqueName: \"kubernetes.io/projected/583aa03c-a2b2-4a09-8790-a00b8d961e97-kube-api-access-zm9g2\") pod \"certified-operators-l9j49\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:38 crc kubenswrapper[4765]: I1203 21:15:38.037946 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:38 crc kubenswrapper[4765]: I1203 21:15:38.309271 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw4x2"] Dec 03 21:15:38 crc kubenswrapper[4765]: I1203 21:15:38.550676 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9j49"] Dec 03 21:15:38 crc kubenswrapper[4765]: W1203 21:15:38.563137 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod583aa03c_a2b2_4a09_8790_a00b8d961e97.slice/crio-f39cccb70918e8f413b18a8d493fc4177289e2fa534a5e4dbcf45e50aa07e02f WatchSource:0}: Error finding container f39cccb70918e8f413b18a8d493fc4177289e2fa534a5e4dbcf45e50aa07e02f: Status 404 returned error can't find the container with id f39cccb70918e8f413b18a8d493fc4177289e2fa534a5e4dbcf45e50aa07e02f Dec 03 21:15:39 crc kubenswrapper[4765]: I1203 21:15:39.155903 4765 generic.go:334] "Generic (PLEG): container finished" podID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerID="26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b" exitCode=0 Dec 03 21:15:39 crc 
kubenswrapper[4765]: I1203 21:15:39.156056 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerDied","Data":"26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b"} Dec 03 21:15:39 crc kubenswrapper[4765]: I1203 21:15:39.156522 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerStarted","Data":"f39cccb70918e8f413b18a8d493fc4177289e2fa534a5e4dbcf45e50aa07e02f"} Dec 03 21:15:39 crc kubenswrapper[4765]: I1203 21:15:39.160978 4765 generic.go:334] "Generic (PLEG): container finished" podID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerID="eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455" exitCode=0 Dec 03 21:15:39 crc kubenswrapper[4765]: I1203 21:15:39.161108 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerDied","Data":"eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455"} Dec 03 21:15:39 crc kubenswrapper[4765]: I1203 21:15:39.161465 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerStarted","Data":"2ebf334f69248cb805aee4f8d2965bb19d0c066c9ba908af5ab21398a14bdabd"} Dec 03 21:15:40 crc kubenswrapper[4765]: I1203 21:15:40.175291 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerStarted","Data":"d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e"} Dec 03 21:15:40 crc kubenswrapper[4765]: I1203 21:15:40.186789 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerStarted","Data":"8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3"} Dec 03 21:15:41 crc kubenswrapper[4765]: I1203 21:15:41.199261 4765 generic.go:334] "Generic (PLEG): container finished" podID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerID="8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3" exitCode=0 Dec 03 21:15:41 crc kubenswrapper[4765]: I1203 21:15:41.199340 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerDied","Data":"8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3"} Dec 03 21:15:43 crc kubenswrapper[4765]: I1203 21:15:43.227399 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerStarted","Data":"97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9"} Dec 03 21:15:43 crc kubenswrapper[4765]: I1203 21:15:43.231722 4765 generic.go:334] "Generic (PLEG): container finished" podID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerID="d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e" exitCode=0 Dec 03 21:15:43 crc kubenswrapper[4765]: I1203 21:15:43.231798 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerDied","Data":"d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e"} Dec 03 21:15:43 crc kubenswrapper[4765]: I1203 21:15:43.254383 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9j49" podStartSLOduration=3.085376076 podStartE2EDuration="6.254367247s" podCreationTimestamp="2025-12-03 21:15:37 +0000 UTC" 
firstStartedPulling="2025-12-03 21:15:39.15877303 +0000 UTC m=+2237.089318221" lastFinishedPulling="2025-12-03 21:15:42.327764241 +0000 UTC m=+2240.258309392" observedRunningTime="2025-12-03 21:15:43.246472293 +0000 UTC m=+2241.177017504" watchObservedRunningTime="2025-12-03 21:15:43.254367247 +0000 UTC m=+2241.184912398" Dec 03 21:15:46 crc kubenswrapper[4765]: I1203 21:15:46.262051 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerStarted","Data":"008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8"} Dec 03 21:15:46 crc kubenswrapper[4765]: I1203 21:15:46.290540 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pw4x2" podStartSLOduration=3.346441895 podStartE2EDuration="9.290518588s" podCreationTimestamp="2025-12-03 21:15:37 +0000 UTC" firstStartedPulling="2025-12-03 21:15:39.163916163 +0000 UTC m=+2237.094461354" lastFinishedPulling="2025-12-03 21:15:45.107992886 +0000 UTC m=+2243.038538047" observedRunningTime="2025-12-03 21:15:46.283146526 +0000 UTC m=+2244.213691717" watchObservedRunningTime="2025-12-03 21:15:46.290518588 +0000 UTC m=+2244.221063749" Dec 03 21:15:47 crc kubenswrapper[4765]: I1203 21:15:47.832032 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:47 crc kubenswrapper[4765]: I1203 21:15:47.832375 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:48 crc kubenswrapper[4765]: I1203 21:15:48.039451 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:48 crc kubenswrapper[4765]: I1203 21:15:48.040161 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:48 crc kubenswrapper[4765]: I1203 21:15:48.102830 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:48 crc kubenswrapper[4765]: I1203 21:15:48.325791 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:48 crc kubenswrapper[4765]: I1203 21:15:48.879633 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pw4x2" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="registry-server" probeResult="failure" output=< Dec 03 21:15:48 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 21:15:48 crc kubenswrapper[4765]: > Dec 03 21:15:49 crc kubenswrapper[4765]: I1203 21:15:49.288137 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9j49"] Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.298369 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9j49" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="registry-server" containerID="cri-o://97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9" gracePeriod=2 Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.746429 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.921445 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9g2\" (UniqueName: \"kubernetes.io/projected/583aa03c-a2b2-4a09-8790-a00b8d961e97-kube-api-access-zm9g2\") pod \"583aa03c-a2b2-4a09-8790-a00b8d961e97\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.921588 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-utilities\") pod \"583aa03c-a2b2-4a09-8790-a00b8d961e97\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.921616 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-catalog-content\") pod \"583aa03c-a2b2-4a09-8790-a00b8d961e97\" (UID: \"583aa03c-a2b2-4a09-8790-a00b8d961e97\") " Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.924044 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-utilities" (OuterVolumeSpecName: "utilities") pod "583aa03c-a2b2-4a09-8790-a00b8d961e97" (UID: "583aa03c-a2b2-4a09-8790-a00b8d961e97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.927728 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583aa03c-a2b2-4a09-8790-a00b8d961e97-kube-api-access-zm9g2" (OuterVolumeSpecName: "kube-api-access-zm9g2") pod "583aa03c-a2b2-4a09-8790-a00b8d961e97" (UID: "583aa03c-a2b2-4a09-8790-a00b8d961e97"). InnerVolumeSpecName "kube-api-access-zm9g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:15:50 crc kubenswrapper[4765]: I1203 21:15:50.966505 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "583aa03c-a2b2-4a09-8790-a00b8d961e97" (UID: "583aa03c-a2b2-4a09-8790-a00b8d961e97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.024034 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9g2\" (UniqueName: \"kubernetes.io/projected/583aa03c-a2b2-4a09-8790-a00b8d961e97-kube-api-access-zm9g2\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.024086 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.024099 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/583aa03c-a2b2-4a09-8790-a00b8d961e97-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.308702 4765 generic.go:334] "Generic (PLEG): container finished" podID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerID="97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9" exitCode=0 Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.308747 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerDied","Data":"97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9"} Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.308778 4765 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-l9j49" event={"ID":"583aa03c-a2b2-4a09-8790-a00b8d961e97","Type":"ContainerDied","Data":"f39cccb70918e8f413b18a8d493fc4177289e2fa534a5e4dbcf45e50aa07e02f"} Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.308777 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9j49" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.308797 4765 scope.go:117] "RemoveContainer" containerID="97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.330690 4765 scope.go:117] "RemoveContainer" containerID="8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.340968 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9j49"] Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.360043 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9j49"] Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.366119 4765 scope.go:117] "RemoveContainer" containerID="26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.385041 4765 scope.go:117] "RemoveContainer" containerID="97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9" Dec 03 21:15:51 crc kubenswrapper[4765]: E1203 21:15:51.385445 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9\": container with ID starting with 97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9 not found: ID does not exist" containerID="97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 
21:15:51.385470 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9"} err="failed to get container status \"97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9\": rpc error: code = NotFound desc = could not find container \"97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9\": container with ID starting with 97b2ff1f9f3b02c66dc418df05e312d61f95abb2110e54f9bd1faf8d74960cc9 not found: ID does not exist" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.385510 4765 scope.go:117] "RemoveContainer" containerID="8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3" Dec 03 21:15:51 crc kubenswrapper[4765]: E1203 21:15:51.386610 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3\": container with ID starting with 8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3 not found: ID does not exist" containerID="8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.386636 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3"} err="failed to get container status \"8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3\": rpc error: code = NotFound desc = could not find container \"8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3\": container with ID starting with 8fcbd3258a7bd5ebdee85e101189c5e1710a8d76f1abeda92e53cb1c289756c3 not found: ID does not exist" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.386655 4765 scope.go:117] "RemoveContainer" containerID="26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b" Dec 03 21:15:51 crc 
kubenswrapper[4765]: E1203 21:15:51.387414 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b\": container with ID starting with 26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b not found: ID does not exist" containerID="26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b" Dec 03 21:15:51 crc kubenswrapper[4765]: I1203 21:15:51.387442 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b"} err="failed to get container status \"26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b\": rpc error: code = NotFound desc = could not find container \"26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b\": container with ID starting with 26e2c9a94900b6d55b8650462bd7f929ee0363a545699f8c4422a2736c6f571b not found: ID does not exist" Dec 03 21:15:52 crc kubenswrapper[4765]: I1203 21:15:52.376149 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" path="/var/lib/kubelet/pods/583aa03c-a2b2-4a09-8790-a00b8d961e97/volumes" Dec 03 21:15:57 crc kubenswrapper[4765]: I1203 21:15:57.906190 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:57 crc kubenswrapper[4765]: I1203 21:15:57.956208 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:58 crc kubenswrapper[4765]: I1203 21:15:58.144780 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pw4x2"] Dec 03 21:15:59 crc kubenswrapper[4765]: I1203 21:15:59.383333 4765 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-pw4x2" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="registry-server" containerID="cri-o://008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8" gracePeriod=2 Dec 03 21:15:59 crc kubenswrapper[4765]: I1203 21:15:59.810947 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:15:59 crc kubenswrapper[4765]: I1203 21:15:59.989556 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-catalog-content\") pod \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " Dec 03 21:15:59 crc kubenswrapper[4765]: I1203 21:15:59.989697 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-utilities\") pod \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " Dec 03 21:15:59 crc kubenswrapper[4765]: I1203 21:15:59.989736 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5rs5\" (UniqueName: \"kubernetes.io/projected/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-kube-api-access-b5rs5\") pod \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\" (UID: \"c57b3ffb-46d1-4374-b343-e60fd4a22ef6\") " Dec 03 21:15:59 crc kubenswrapper[4765]: I1203 21:15:59.991221 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-utilities" (OuterVolumeSpecName: "utilities") pod "c57b3ffb-46d1-4374-b343-e60fd4a22ef6" (UID: "c57b3ffb-46d1-4374-b343-e60fd4a22ef6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.002500 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-kube-api-access-b5rs5" (OuterVolumeSpecName: "kube-api-access-b5rs5") pod "c57b3ffb-46d1-4374-b343-e60fd4a22ef6" (UID: "c57b3ffb-46d1-4374-b343-e60fd4a22ef6"). InnerVolumeSpecName "kube-api-access-b5rs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.092445 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.092505 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5rs5\" (UniqueName: \"kubernetes.io/projected/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-kube-api-access-b5rs5\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.137917 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c57b3ffb-46d1-4374-b343-e60fd4a22ef6" (UID: "c57b3ffb-46d1-4374-b343-e60fd4a22ef6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.193979 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57b3ffb-46d1-4374-b343-e60fd4a22ef6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.403169 4765 generic.go:334] "Generic (PLEG): container finished" podID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerID="008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8" exitCode=0 Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.403268 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerDied","Data":"008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8"} Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.403363 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4x2" event={"ID":"c57b3ffb-46d1-4374-b343-e60fd4a22ef6","Type":"ContainerDied","Data":"2ebf334f69248cb805aee4f8d2965bb19d0c066c9ba908af5ab21398a14bdabd"} Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.403401 4765 scope.go:117] "RemoveContainer" containerID="008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.403709 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pw4x2" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.452784 4765 scope.go:117] "RemoveContainer" containerID="d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.463442 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pw4x2"] Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.473651 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pw4x2"] Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.479483 4765 scope.go:117] "RemoveContainer" containerID="eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.533888 4765 scope.go:117] "RemoveContainer" containerID="008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8" Dec 03 21:16:00 crc kubenswrapper[4765]: E1203 21:16:00.534536 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8\": container with ID starting with 008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8 not found: ID does not exist" containerID="008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.534575 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8"} err="failed to get container status \"008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8\": rpc error: code = NotFound desc = could not find container \"008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8\": container with ID starting with 008cee216da7d9d64ad413719602b3277170fd283abbe239b34ef717bb684ec8 not found: ID does 
not exist" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.534603 4765 scope.go:117] "RemoveContainer" containerID="d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e" Dec 03 21:16:00 crc kubenswrapper[4765]: E1203 21:16:00.534992 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e\": container with ID starting with d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e not found: ID does not exist" containerID="d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.535022 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e"} err="failed to get container status \"d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e\": rpc error: code = NotFound desc = could not find container \"d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e\": container with ID starting with d81af7c4e2cef6f87c0d8d69f5192cf87e22e1c0cfdce68327f48ecce7f4458e not found: ID does not exist" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.535039 4765 scope.go:117] "RemoveContainer" containerID="eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455" Dec 03 21:16:00 crc kubenswrapper[4765]: E1203 21:16:00.535456 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455\": container with ID starting with eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455 not found: ID does not exist" containerID="eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455" Dec 03 21:16:00 crc kubenswrapper[4765]: I1203 21:16:00.535489 4765 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455"} err="failed to get container status \"eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455\": rpc error: code = NotFound desc = could not find container \"eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455\": container with ID starting with eb2860da553f2fa59262476bfb1d525b39324e7cfcc8f3b66b85e31a93dc3455 not found: ID does not exist" Dec 03 21:16:02 crc kubenswrapper[4765]: I1203 21:16:02.381591 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" path="/var/lib/kubelet/pods/c57b3ffb-46d1-4374-b343-e60fd4a22ef6/volumes" Dec 03 21:16:03 crc kubenswrapper[4765]: I1203 21:16:03.447211 4765 generic.go:334] "Generic (PLEG): container finished" podID="51f7c3b1-f566-4371-ad1d-487bbfa1be12" containerID="189193d2b7f48e76d9c987f6baedc0db7b20255a4c795a7a4dc32fff613a9803" exitCode=0 Dec 03 21:16:03 crc kubenswrapper[4765]: I1203 21:16:03.447271 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" event={"ID":"51f7c3b1-f566-4371-ad1d-487bbfa1be12","Type":"ContainerDied","Data":"189193d2b7f48e76d9c987f6baedc0db7b20255a4c795a7a4dc32fff613a9803"} Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.836690 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.980943 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-inventory\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981374 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ceph\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981412 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-bootstrap-combined-ca-bundle\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981457 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981482 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 
21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981510 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-nova-combined-ca-bundle\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981548 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-ovn-default-certs-0\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981583 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-repo-setup-combined-ca-bundle\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981608 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ovn-combined-ca-bundle\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981638 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-libvirt-combined-ca-bundle\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981668 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ssh-key\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981695 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcxsj\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-kube-api-access-jcxsj\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.981726 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-neutron-metadata-combined-ca-bundle\") pod \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\" (UID: \"51f7c3b1-f566-4371-ad1d-487bbfa1be12\") " Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.988820 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.988842 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ceph" (OuterVolumeSpecName: "ceph") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.988914 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.988950 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-kube-api-access-jcxsj" (OuterVolumeSpecName: "kube-api-access-jcxsj") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "kube-api-access-jcxsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.988968 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.989557 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.991713 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:16:04 crc kubenswrapper[4765]: I1203 21:16:04.995624 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.000758 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.000986 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.001313 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.011395 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-inventory" (OuterVolumeSpecName: "inventory") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.026891 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "51f7c3b1-f566-4371-ad1d-487bbfa1be12" (UID: "51f7c3b1-f566-4371-ad1d-487bbfa1be12"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083080 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083112 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083121 4765 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083131 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083141 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083150 4765 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083158 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083169 4765 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083178 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083187 4765 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083195 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083224 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcxsj\" (UniqueName: \"kubernetes.io/projected/51f7c3b1-f566-4371-ad1d-487bbfa1be12-kube-api-access-jcxsj\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.083234 4765 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f7c3b1-f566-4371-ad1d-487bbfa1be12-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.506135 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" event={"ID":"51f7c3b1-f566-4371-ad1d-487bbfa1be12","Type":"ContainerDied","Data":"69a4fe237d58d9cf5992944fb2d5a027568e7526d1f92be022033d60dab7f6c2"} Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.506243 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a4fe237d58d9cf5992944fb2d5a027568e7526d1f92be022033d60dab7f6c2" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.506347 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614342 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx"] Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614762 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="extract-utilities" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614787 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="extract-utilities" Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614816 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="registry-server" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614825 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="registry-server" Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614839 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="extract-content" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614850 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="extract-content" Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614867 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="extract-content" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614876 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="extract-content" Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614887 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="registry-server" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614895 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="registry-server" Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614912 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f7c3b1-f566-4371-ad1d-487bbfa1be12" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614921 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f7c3b1-f566-4371-ad1d-487bbfa1be12" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 21:16:05 crc kubenswrapper[4765]: E1203 21:16:05.614934 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="extract-utilities" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.614944 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="extract-utilities" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.615174 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f7c3b1-f566-4371-ad1d-487bbfa1be12" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 21:16:05 crc 
kubenswrapper[4765]: I1203 21:16:05.615200 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="583aa03c-a2b2-4a09-8790-a00b8d961e97" containerName="registry-server" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.615226 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57b3ffb-46d1-4374-b343-e60fd4a22ef6" containerName="registry-server" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.615959 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.623355 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.623653 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.623783 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.623880 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.624077 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.630012 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx"] Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.698252 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: 
\"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.698330 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.698530 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppx95\" (UniqueName: \"kubernetes.io/projected/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-kube-api-access-ppx95\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.698581 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.799982 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppx95\" (UniqueName: \"kubernetes.io/projected/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-kube-api-access-ppx95\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.800064 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.800135 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.800160 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.807442 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.807616 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.809363 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.825570 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppx95\" (UniqueName: \"kubernetes.io/projected/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-kube-api-access-ppx95\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:05 crc kubenswrapper[4765]: I1203 21:16:05.985556 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:06 crc kubenswrapper[4765]: W1203 21:16:06.370989 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b77ee4_d4b7_48a2_993b_c7e911e88b0d.slice/crio-83cc70bd5a5ac87ec2c91fd9772bcd77640876e03f63b7942b01f6502747c214 WatchSource:0}: Error finding container 83cc70bd5a5ac87ec2c91fd9772bcd77640876e03f63b7942b01f6502747c214: Status 404 returned error can't find the container with id 83cc70bd5a5ac87ec2c91fd9772bcd77640876e03f63b7942b01f6502747c214 Dec 03 21:16:06 crc kubenswrapper[4765]: I1203 21:16:06.375213 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx"] Dec 03 21:16:06 crc kubenswrapper[4765]: I1203 21:16:06.516884 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" event={"ID":"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d","Type":"ContainerStarted","Data":"83cc70bd5a5ac87ec2c91fd9772bcd77640876e03f63b7942b01f6502747c214"} Dec 03 21:16:07 crc kubenswrapper[4765]: I1203 21:16:07.529256 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" event={"ID":"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d","Type":"ContainerStarted","Data":"ee3708d75d440b93cd01c449fefe8a29d1de14387f4e421be16dccaa5abf4c59"} Dec 03 21:16:07 crc kubenswrapper[4765]: I1203 21:16:07.556878 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" podStartSLOduration=2.126201622 podStartE2EDuration="2.556860145s" podCreationTimestamp="2025-12-03 21:16:05 +0000 UTC" firstStartedPulling="2025-12-03 21:16:06.374499967 +0000 UTC m=+2264.305045128" lastFinishedPulling="2025-12-03 21:16:06.80515845 +0000 UTC m=+2264.735703651" 
observedRunningTime="2025-12-03 21:16:07.545086257 +0000 UTC m=+2265.475631458" watchObservedRunningTime="2025-12-03 21:16:07.556860145 +0000 UTC m=+2265.487405296" Dec 03 21:16:13 crc kubenswrapper[4765]: I1203 21:16:13.595879 4765 generic.go:334] "Generic (PLEG): container finished" podID="d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" containerID="ee3708d75d440b93cd01c449fefe8a29d1de14387f4e421be16dccaa5abf4c59" exitCode=0 Dec 03 21:16:13 crc kubenswrapper[4765]: I1203 21:16:13.595970 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" event={"ID":"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d","Type":"ContainerDied","Data":"ee3708d75d440b93cd01c449fefe8a29d1de14387f4e421be16dccaa5abf4c59"} Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.052125 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.224719 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ceph\") pod \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.224794 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-inventory\") pod \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.224910 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ssh-key\") pod \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " Dec 03 21:16:15 
crc kubenswrapper[4765]: I1203 21:16:15.225003 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppx95\" (UniqueName: \"kubernetes.io/projected/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-kube-api-access-ppx95\") pod \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\" (UID: \"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d\") " Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.238564 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ceph" (OuterVolumeSpecName: "ceph") pod "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" (UID: "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.238576 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-kube-api-access-ppx95" (OuterVolumeSpecName: "kube-api-access-ppx95") pod "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" (UID: "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d"). InnerVolumeSpecName "kube-api-access-ppx95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.257345 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-inventory" (OuterVolumeSpecName: "inventory") pod "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" (UID: "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.261314 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" (UID: "d5b77ee4-d4b7-48a2-993b-c7e911e88b0d"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.327554 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppx95\" (UniqueName: \"kubernetes.io/projected/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-kube-api-access-ppx95\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.327602 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.327612 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.327624 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5b77ee4-d4b7-48a2-993b-c7e911e88b0d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.617131 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" event={"ID":"d5b77ee4-d4b7-48a2-993b-c7e911e88b0d","Type":"ContainerDied","Data":"83cc70bd5a5ac87ec2c91fd9772bcd77640876e03f63b7942b01f6502747c214"} Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.617203 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83cc70bd5a5ac87ec2c91fd9772bcd77640876e03f63b7942b01f6502747c214" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.617157 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.723844 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"] Dec 03 21:16:15 crc kubenswrapper[4765]: E1203 21:16:15.724204 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.724218 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.724415 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b77ee4-d4b7-48a2-993b-c7e911e88b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.724991 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.729400 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.729597 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.729750 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.730775 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.731167 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.732360 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.750652 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"] Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.839086 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.839203 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.839269 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gj9k\" (UniqueName: \"kubernetes.io/projected/acf5a824-dd5c-412f-a7b2-848352ec8eaa-kube-api-access-2gj9k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.839397 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.839501 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.839709 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.941223 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.941293 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gj9k\" (UniqueName: \"kubernetes.io/projected/acf5a824-dd5c-412f-a7b2-848352ec8eaa-kube-api-access-2gj9k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.941368 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.941410 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.941541 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.941732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.944146 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.947608 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.950238 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.953644 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.961004 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:15 crc kubenswrapper[4765]: I1203 21:16:15.972414 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gj9k\" (UniqueName: \"kubernetes.io/projected/acf5a824-dd5c-412f-a7b2-848352ec8eaa-kube-api-access-2gj9k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbhcq\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:16 crc kubenswrapper[4765]: I1203 21:16:16.062346 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:16:16 crc kubenswrapper[4765]: I1203 21:16:16.639977 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"]
Dec 03 21:16:17 crc kubenswrapper[4765]: I1203 21:16:17.658013 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" event={"ID":"acf5a824-dd5c-412f-a7b2-848352ec8eaa","Type":"ContainerStarted","Data":"60afa58c64a0ef2aab265fcf13de4e519f3cc03d5635635111a28e4cbeeb8c26"}
Dec 03 21:16:17 crc kubenswrapper[4765]: I1203 21:16:17.658319 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" event={"ID":"acf5a824-dd5c-412f-a7b2-848352ec8eaa","Type":"ContainerStarted","Data":"645148ea0d3bf1bda2f21c099c54c20fde1f1434edddae42e6e4feae50de1ca4"}
Dec 03 21:16:17 crc kubenswrapper[4765]: I1203 21:16:17.686788 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" podStartSLOduration=2.246456629 podStartE2EDuration="2.686768933s" podCreationTimestamp="2025-12-03 21:16:15 +0000 UTC" firstStartedPulling="2025-12-03 21:16:16.642641825 +0000 UTC m=+2274.573186986" lastFinishedPulling="2025-12-03 21:16:17.082954089 +0000 UTC m=+2275.013499290" observedRunningTime="2025-12-03 21:16:17.682099576 +0000 UTC m=+2275.612644727" watchObservedRunningTime="2025-12-03 21:16:17.686768933 +0000 UTC m=+2275.617314084"
Dec 03 21:17:24 crc kubenswrapper[4765]: I1203 21:17:24.798219 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 21:17:24 crc kubenswrapper[4765]: I1203 21:17:24.799048 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 21:17:34 crc kubenswrapper[4765]: I1203 21:17:34.462850 4765 generic.go:334] "Generic (PLEG): container finished" podID="acf5a824-dd5c-412f-a7b2-848352ec8eaa" containerID="60afa58c64a0ef2aab265fcf13de4e519f3cc03d5635635111a28e4cbeeb8c26" exitCode=0
Dec 03 21:17:34 crc kubenswrapper[4765]: I1203 21:17:34.462940 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" event={"ID":"acf5a824-dd5c-412f-a7b2-848352ec8eaa","Type":"ContainerDied","Data":"60afa58c64a0ef2aab265fcf13de4e519f3cc03d5635635111a28e4cbeeb8c26"}
Dec 03 21:17:35 crc kubenswrapper[4765]: I1203 21:17:35.973997 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.102876 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovncontroller-config-0\") pod \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") "
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.103004 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ceph\") pod \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") "
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.103066 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gj9k\" (UniqueName: \"kubernetes.io/projected/acf5a824-dd5c-412f-a7b2-848352ec8eaa-kube-api-access-2gj9k\") pod \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") "
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.103129 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ssh-key\") pod \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") "
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.103171 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovn-combined-ca-bundle\") pod \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") "
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.103264 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-inventory\") pod \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\" (UID: \"acf5a824-dd5c-412f-a7b2-848352ec8eaa\") "
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.109390 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ceph" (OuterVolumeSpecName: "ceph") pod "acf5a824-dd5c-412f-a7b2-848352ec8eaa" (UID: "acf5a824-dd5c-412f-a7b2-848352ec8eaa"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.109700 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "acf5a824-dd5c-412f-a7b2-848352ec8eaa" (UID: "acf5a824-dd5c-412f-a7b2-848352ec8eaa"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.115661 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf5a824-dd5c-412f-a7b2-848352ec8eaa-kube-api-access-2gj9k" (OuterVolumeSpecName: "kube-api-access-2gj9k") pod "acf5a824-dd5c-412f-a7b2-848352ec8eaa" (UID: "acf5a824-dd5c-412f-a7b2-848352ec8eaa"). InnerVolumeSpecName "kube-api-access-2gj9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.143923 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "acf5a824-dd5c-412f-a7b2-848352ec8eaa" (UID: "acf5a824-dd5c-412f-a7b2-848352ec8eaa"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.145015 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-inventory" (OuterVolumeSpecName: "inventory") pod "acf5a824-dd5c-412f-a7b2-848352ec8eaa" (UID: "acf5a824-dd5c-412f-a7b2-848352ec8eaa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.153108 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "acf5a824-dd5c-412f-a7b2-848352ec8eaa" (UID: "acf5a824-dd5c-412f-a7b2-848352ec8eaa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.204849 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ceph\") on node \"crc\" DevicePath \"\""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.204884 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gj9k\" (UniqueName: \"kubernetes.io/projected/acf5a824-dd5c-412f-a7b2-848352ec8eaa-kube-api-access-2gj9k\") on node \"crc\" DevicePath \"\""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.204896 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.204904 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.204912 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf5a824-dd5c-412f-a7b2-848352ec8eaa-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.204921 4765 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/acf5a824-dd5c-412f-a7b2-848352ec8eaa-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.483591 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq" event={"ID":"acf5a824-dd5c-412f-a7b2-848352ec8eaa","Type":"ContainerDied","Data":"645148ea0d3bf1bda2f21c099c54c20fde1f1434edddae42e6e4feae50de1ca4"}
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.483652 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="645148ea0d3bf1bda2f21c099c54c20fde1f1434edddae42e6e4feae50de1ca4"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.483869 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbhcq"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.679084 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"]
Dec 03 21:17:36 crc kubenswrapper[4765]: E1203 21:17:36.679887 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf5a824-dd5c-412f-a7b2-848352ec8eaa" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.680015 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf5a824-dd5c-412f-a7b2-848352ec8eaa" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.680433 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf5a824-dd5c-412f-a7b2-848352ec8eaa" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.683273 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.687692 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.688107 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.688107 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.689563 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.689809 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.690192 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.690572 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.696082 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"]
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821480 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821555 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhzpl\" (UniqueName: \"kubernetes.io/projected/360b19b3-c391-467e-ab4c-f7cb150873ea-kube-api-access-nhzpl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821596 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821767 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821799 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821847 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.821912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923433 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923513 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923547 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923580 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhzpl\" (UniqueName: \"kubernetes.io/projected/360b19b3-c391-467e-ab4c-f7cb150873ea-kube-api-access-nhzpl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923733 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923776 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.923842 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.929352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.929668 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.929840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.930165 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.931719 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.937509 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:36 crc kubenswrapper[4765]: I1203 21:17:36.941845 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhzpl\" (UniqueName: \"kubernetes.io/projected/360b19b3-c391-467e-ab4c-f7cb150873ea-kube-api-access-nhzpl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:37 crc kubenswrapper[4765]: I1203 21:17:37.004706 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:17:37 crc kubenswrapper[4765]: I1203 21:17:37.439424 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"]
Dec 03 21:17:37 crc kubenswrapper[4765]: W1203 21:17:37.458258 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360b19b3_c391_467e_ab4c_f7cb150873ea.slice/crio-f2daf428ac78339f293c8a3ff82afd2ce19a6f9e314b8dee5426b90ed7f49b45 WatchSource:0}: Error finding container f2daf428ac78339f293c8a3ff82afd2ce19a6f9e314b8dee5426b90ed7f49b45: Status 404 returned error can't find the container with id f2daf428ac78339f293c8a3ff82afd2ce19a6f9e314b8dee5426b90ed7f49b45
Dec 03 21:17:37 crc kubenswrapper[4765]: I1203 21:17:37.494180 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7" event={"ID":"360b19b3-c391-467e-ab4c-f7cb150873ea","Type":"ContainerStarted","Data":"f2daf428ac78339f293c8a3ff82afd2ce19a6f9e314b8dee5426b90ed7f49b45"}
Dec 03 21:17:38 crc kubenswrapper[4765]: I1203 21:17:38.503257 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7" event={"ID":"360b19b3-c391-467e-ab4c-f7cb150873ea","Type":"ContainerStarted","Data":"93fa8e4613a220782f3ae041b71a57ac947c47c9887e04f260b57f463ebc8c42"}
Dec 03 21:17:54 crc kubenswrapper[4765]: I1203 21:17:54.798209 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 21:17:54 crc kubenswrapper[4765]: I1203 21:17:54.799477 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 21:18:15 crc kubenswrapper[4765]: I1203 21:18:15.391062 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-rtzp2" podUID="d3648e48-1afd-42ec-9aec-4d91958639b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 03 21:18:24 crc kubenswrapper[4765]: I1203 21:18:24.798410 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 21:18:24 crc kubenswrapper[4765]: I1203 21:18:24.799070 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 21:18:24 crc kubenswrapper[4765]: I1203 21:18:24.799134 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp"
Dec 03 21:18:24 crc kubenswrapper[4765]: I1203 21:18:24.800180 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 21:18:24 crc kubenswrapper[4765]: I1203 21:18:24.800273 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" gracePeriod=600
Dec 03 21:18:24 crc kubenswrapper[4765]: E1203 21:18:24.943488 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:18:25 crc kubenswrapper[4765]: I1203 21:18:25.531905 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" exitCode=0
Dec 03 21:18:25 crc kubenswrapper[4765]: I1203 21:18:25.531987 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"}
Dec 03 21:18:25 crc kubenswrapper[4765]: I1203 21:18:25.532329 4765 scope.go:117] "RemoveContainer" containerID="2922e087a1232fd3024eb1a7fa81c56ddda8193852e4c595b9e4df95b134f51b"
Dec 03 21:18:25 crc kubenswrapper[4765]: I1203 21:18:25.533049 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:18:25 crc kubenswrapper[4765]: E1203 21:18:25.533459 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:18:25 crc kubenswrapper[4765]: I1203 21:18:25.586844 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7" podStartSLOduration=49.042384704 podStartE2EDuration="49.586788844s" podCreationTimestamp="2025-12-03 21:17:36 +0000 UTC" firstStartedPulling="2025-12-03 21:17:37.463268419 +0000 UTC m=+2355.393813580" lastFinishedPulling="2025-12-03 21:17:38.007672569 +0000 UTC m=+2355.938217720" observedRunningTime="2025-12-03 21:17:38.524582955 +0000 UTC m=+2356.455128116" watchObservedRunningTime="2025-12-03 21:18:25.586788844 +0000 UTC m=+2403.517334025"
Dec 03 21:18:38 crc kubenswrapper[4765]: I1203 21:18:38.360599 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:18:38 crc kubenswrapper[4765]: E1203 21:18:38.361712 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:18:48 crc kubenswrapper[4765]: I1203 21:18:48.810178 4765 generic.go:334] "Generic (PLEG): container finished" podID="360b19b3-c391-467e-ab4c-f7cb150873ea" containerID="93fa8e4613a220782f3ae041b71a57ac947c47c9887e04f260b57f463ebc8c42" exitCode=0
Dec 03 21:18:48 crc kubenswrapper[4765]: I1203 21:18:48.810344 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7" event={"ID":"360b19b3-c391-467e-ab4c-f7cb150873ea","Type":"ContainerDied","Data":"93fa8e4613a220782f3ae041b71a57ac947c47c9887e04f260b57f463ebc8c42"}
Dec 03 21:18:49 crc kubenswrapper[4765]: I1203 21:18:49.361001 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:18:49 crc kubenswrapper[4765]: E1203 21:18:49.361374 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.247236 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7"
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342693 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-ovn-metadata-agent-neutron-config-0\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342771 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-nova-metadata-neutron-config-0\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342822 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-metadata-combined-ca-bundle\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342851 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhzpl\" (UniqueName: \"kubernetes.io/projected/360b19b3-c391-467e-ab4c-f7cb150873ea-kube-api-access-nhzpl\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342887 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ceph\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342928 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ssh-key\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.342968 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-inventory\") pod \"360b19b3-c391-467e-ab4c-f7cb150873ea\" (UID: \"360b19b3-c391-467e-ab4c-f7cb150873ea\") "
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.349148 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360b19b3-c391-467e-ab4c-f7cb150873ea-kube-api-access-nhzpl" (OuterVolumeSpecName: "kube-api-access-nhzpl") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "kube-api-access-nhzpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.349487 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ceph" (OuterVolumeSpecName: "ceph") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.351127 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.376976 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.383400 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.390905 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.395554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-inventory" (OuterVolumeSpecName: "inventory") pod "360b19b3-c391-467e-ab4c-f7cb150873ea" (UID: "360b19b3-c391-467e-ab4c-f7cb150873ea"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445616 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445665 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445675 4765 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445688 4765 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445697 4765 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445706 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhzpl\" (UniqueName: \"kubernetes.io/projected/360b19b3-c391-467e-ab4c-f7cb150873ea-kube-api-access-nhzpl\") on node \"crc\" DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.445715 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/360b19b3-c391-467e-ab4c-f7cb150873ea-ceph\") on node \"crc\" 
DevicePath \"\"" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.837789 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7" event={"ID":"360b19b3-c391-467e-ab4c-f7cb150873ea","Type":"ContainerDied","Data":"f2daf428ac78339f293c8a3ff82afd2ce19a6f9e314b8dee5426b90ed7f49b45"} Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.838330 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2daf428ac78339f293c8a3ff82afd2ce19a6f9e314b8dee5426b90ed7f49b45" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.837897 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.986746 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw"] Dec 03 21:18:50 crc kubenswrapper[4765]: E1203 21:18:50.987652 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360b19b3-c391-467e-ab4c-f7cb150873ea" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.987689 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="360b19b3-c391-467e-ab4c-f7cb150873ea" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.988054 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="360b19b3-c391-467e-ab4c-f7cb150873ea" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.989214 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.992193 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.992214 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.993784 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.993990 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.994006 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.994245 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 21:18:50 crc kubenswrapper[4765]: I1203 21:18:50.996936 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw"] Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.158417 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.158481 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.158520 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8p7\" (UniqueName: \"kubernetes.io/projected/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-kube-api-access-wx8p7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.158574 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.158648 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.158837 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.260753 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.260828 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.260864 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx8p7\" (UniqueName: \"kubernetes.io/projected/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-kube-api-access-wx8p7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.260943 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.261040 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.261091 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.266225 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.266599 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.267654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 
21:18:51.271122 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.272159 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.294415 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx8p7\" (UniqueName: \"kubernetes.io/projected/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-kube-api-access-wx8p7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.348982 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" Dec 03 21:18:51 crc kubenswrapper[4765]: I1203 21:18:51.980591 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw"] Dec 03 21:18:52 crc kubenswrapper[4765]: I1203 21:18:52.870724 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" event={"ID":"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96","Type":"ContainerStarted","Data":"623c8e81c671954f3d1d5f34ff7cb5c80e31a3fca4d037b8493ae808d6a277a2"} Dec 03 21:18:52 crc kubenswrapper[4765]: I1203 21:18:52.871149 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" event={"ID":"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96","Type":"ContainerStarted","Data":"e5d81fa59c068bfd3b2609d9b18b43a0f2e6b429ec0648f1b18fc4bccbfd98aa"} Dec 03 21:18:52 crc kubenswrapper[4765]: I1203 21:18:52.914345 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" podStartSLOduration=2.487425321 podStartE2EDuration="2.914294513s" podCreationTimestamp="2025-12-03 21:18:50 +0000 UTC" firstStartedPulling="2025-12-03 21:18:51.970115713 +0000 UTC m=+2429.900660864" lastFinishedPulling="2025-12-03 21:18:52.396984885 +0000 UTC m=+2430.327530056" observedRunningTime="2025-12-03 21:18:52.898241078 +0000 UTC m=+2430.828786249" watchObservedRunningTime="2025-12-03 21:18:52.914294513 +0000 UTC m=+2430.844839704" Dec 03 21:19:04 crc kubenswrapper[4765]: I1203 21:19:04.360368 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:19:04 crc kubenswrapper[4765]: E1203 21:19:04.361401 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:19:18 crc kubenswrapper[4765]: I1203 21:19:18.359905 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:19:18 crc kubenswrapper[4765]: E1203 21:19:18.360755 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:19:33 crc kubenswrapper[4765]: I1203 21:19:33.360175 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:19:33 crc kubenswrapper[4765]: E1203 21:19:33.361241 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:19:46 crc kubenswrapper[4765]: I1203 21:19:46.360402 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:19:46 crc kubenswrapper[4765]: E1203 21:19:46.361475 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:19:57 crc kubenswrapper[4765]: I1203 21:19:57.359773 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:19:57 crc kubenswrapper[4765]: E1203 21:19:57.360531 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:20:12 crc kubenswrapper[4765]: I1203 21:20:12.393258 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:20:12 crc kubenswrapper[4765]: E1203 21:20:12.398551 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:20:24 crc kubenswrapper[4765]: I1203 21:20:24.360684 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:20:24 crc kubenswrapper[4765]: E1203 21:20:24.361842 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:20:38 crc kubenswrapper[4765]: I1203 21:20:38.360732 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:20:38 crc kubenswrapper[4765]: E1203 21:20:38.361734 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:20:49 crc kubenswrapper[4765]: I1203 21:20:49.360488 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:20:49 crc kubenswrapper[4765]: E1203 21:20:49.361410 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:21:04 crc kubenswrapper[4765]: I1203 21:21:04.360275 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:21:04 crc kubenswrapper[4765]: E1203 21:21:04.361267 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:21:17 crc kubenswrapper[4765]: I1203 21:21:17.361188 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:21:17 crc kubenswrapper[4765]: E1203 21:21:17.362265 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:21:28 crc kubenswrapper[4765]: I1203 21:21:28.360264 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:21:28 crc kubenswrapper[4765]: E1203 21:21:28.361289 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:21:41 crc kubenswrapper[4765]: I1203 21:21:41.360702 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:21:41 crc kubenswrapper[4765]: E1203 21:21:41.362850 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:21:52 crc kubenswrapper[4765]: I1203 21:21:52.373129 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:21:52 crc kubenswrapper[4765]: E1203 21:21:52.374283 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:22:05 crc kubenswrapper[4765]: I1203 21:22:05.360157 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:22:05 crc kubenswrapper[4765]: E1203 21:22:05.361659 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:22:20 crc kubenswrapper[4765]: I1203 21:22:20.360395 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:22:20 crc kubenswrapper[4765]: E1203 21:22:20.361362 4765 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.418934 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjrsr"] Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.427231 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.431677 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-utilities\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.431763 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmqt\" (UniqueName: \"kubernetes.io/projected/929c4c33-0001-409b-a278-6b07996300c1-kube-api-access-dcmqt\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.431785 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-catalog-content\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 
03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.435824 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjrsr"] Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.532975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-utilities\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.533041 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmqt\" (UniqueName: \"kubernetes.io/projected/929c4c33-0001-409b-a278-6b07996300c1-kube-api-access-dcmqt\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.533064 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-catalog-content\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.533544 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-utilities\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.533585 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-catalog-content\") pod 
\"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.564831 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmqt\" (UniqueName: \"kubernetes.io/projected/929c4c33-0001-409b-a278-6b07996300c1-kube-api-access-dcmqt\") pod \"community-operators-qjrsr\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:24 crc kubenswrapper[4765]: I1203 21:22:24.758725 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:25 crc kubenswrapper[4765]: I1203 21:22:25.248180 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjrsr"] Dec 03 21:22:25 crc kubenswrapper[4765]: W1203 21:22:25.256655 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod929c4c33_0001_409b_a278_6b07996300c1.slice/crio-f892922bf5dac8bf1a893a843ab44f16e12abc0e5ce8a86209facfae40913d5e WatchSource:0}: Error finding container f892922bf5dac8bf1a893a843ab44f16e12abc0e5ce8a86209facfae40913d5e: Status 404 returned error can't find the container with id f892922bf5dac8bf1a893a843ab44f16e12abc0e5ce8a86209facfae40913d5e Dec 03 21:22:25 crc kubenswrapper[4765]: I1203 21:22:25.294042 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerStarted","Data":"f892922bf5dac8bf1a893a843ab44f16e12abc0e5ce8a86209facfae40913d5e"} Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.306102 4765 generic.go:334] "Generic (PLEG): container finished" podID="929c4c33-0001-409b-a278-6b07996300c1" 
containerID="b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932" exitCode=0 Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.306151 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerDied","Data":"b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932"} Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.309435 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.784488 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bkc4k"] Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.787747 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.798488 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkc4k"] Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.873866 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cfvq\" (UniqueName: \"kubernetes.io/projected/0666b38b-7b33-4361-b104-614c47f04ae4-kube-api-access-5cfvq\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.873957 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-utilities\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.874103 
4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-catalog-content\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.975613 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cfvq\" (UniqueName: \"kubernetes.io/projected/0666b38b-7b33-4361-b104-614c47f04ae4-kube-api-access-5cfvq\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.975667 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-utilities\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.975743 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-catalog-content\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.976173 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-catalog-content\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.976444 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-utilities\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:26 crc kubenswrapper[4765]: I1203 21:22:26.997197 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cfvq\" (UniqueName: \"kubernetes.io/projected/0666b38b-7b33-4361-b104-614c47f04ae4-kube-api-access-5cfvq\") pod \"redhat-marketplace-bkc4k\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:27 crc kubenswrapper[4765]: I1203 21:22:27.156815 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:27 crc kubenswrapper[4765]: I1203 21:22:27.320230 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerStarted","Data":"fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e"} Dec 03 21:22:27 crc kubenswrapper[4765]: I1203 21:22:27.626642 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkc4k"] Dec 03 21:22:28 crc kubenswrapper[4765]: I1203 21:22:28.331331 4765 generic.go:334] "Generic (PLEG): container finished" podID="929c4c33-0001-409b-a278-6b07996300c1" containerID="fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e" exitCode=0 Dec 03 21:22:28 crc kubenswrapper[4765]: I1203 21:22:28.331428 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerDied","Data":"fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e"} Dec 03 21:22:28 crc 
kubenswrapper[4765]: I1203 21:22:28.335857 4765 generic.go:334] "Generic (PLEG): container finished" podID="0666b38b-7b33-4361-b104-614c47f04ae4" containerID="1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52" exitCode=0 Dec 03 21:22:28 crc kubenswrapper[4765]: I1203 21:22:28.335906 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkc4k" event={"ID":"0666b38b-7b33-4361-b104-614c47f04ae4","Type":"ContainerDied","Data":"1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52"} Dec 03 21:22:28 crc kubenswrapper[4765]: I1203 21:22:28.335958 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkc4k" event={"ID":"0666b38b-7b33-4361-b104-614c47f04ae4","Type":"ContainerStarted","Data":"936802a633e86ee80bc668ad2f719a1361defb7565463533c22c3de7aa7836c8"} Dec 03 21:22:29 crc kubenswrapper[4765]: I1203 21:22:29.352555 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerStarted","Data":"b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e"} Dec 03 21:22:29 crc kubenswrapper[4765]: I1203 21:22:29.357410 4765 generic.go:334] "Generic (PLEG): container finished" podID="0666b38b-7b33-4361-b104-614c47f04ae4" containerID="7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b" exitCode=0 Dec 03 21:22:29 crc kubenswrapper[4765]: I1203 21:22:29.357582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkc4k" event={"ID":"0666b38b-7b33-4361-b104-614c47f04ae4","Type":"ContainerDied","Data":"7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b"} Dec 03 21:22:29 crc kubenswrapper[4765]: I1203 21:22:29.372941 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjrsr" 
podStartSLOduration=2.9416956130000003 podStartE2EDuration="5.372922546s" podCreationTimestamp="2025-12-03 21:22:24 +0000 UTC" firstStartedPulling="2025-12-03 21:22:26.30920432 +0000 UTC m=+2644.239749471" lastFinishedPulling="2025-12-03 21:22:28.740431243 +0000 UTC m=+2646.670976404" observedRunningTime="2025-12-03 21:22:29.367879129 +0000 UTC m=+2647.298424280" watchObservedRunningTime="2025-12-03 21:22:29.372922546 +0000 UTC m=+2647.303467697" Dec 03 21:22:30 crc kubenswrapper[4765]: I1203 21:22:30.382039 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkc4k" event={"ID":"0666b38b-7b33-4361-b104-614c47f04ae4","Type":"ContainerStarted","Data":"a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421"} Dec 03 21:22:30 crc kubenswrapper[4765]: I1203 21:22:30.407520 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bkc4k" podStartSLOduration=2.97340284 podStartE2EDuration="4.407501405s" podCreationTimestamp="2025-12-03 21:22:26 +0000 UTC" firstStartedPulling="2025-12-03 21:22:28.337640158 +0000 UTC m=+2646.268185309" lastFinishedPulling="2025-12-03 21:22:29.771738723 +0000 UTC m=+2647.702283874" observedRunningTime="2025-12-03 21:22:30.40180359 +0000 UTC m=+2648.332348741" watchObservedRunningTime="2025-12-03 21:22:30.407501405 +0000 UTC m=+2648.338046556" Dec 03 21:22:34 crc kubenswrapper[4765]: I1203 21:22:34.359648 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:22:34 crc kubenswrapper[4765]: E1203 21:22:34.362396 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:22:34 crc kubenswrapper[4765]: I1203 21:22:34.759277 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:34 crc kubenswrapper[4765]: I1203 21:22:34.759415 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:34 crc kubenswrapper[4765]: I1203 21:22:34.811866 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:35 crc kubenswrapper[4765]: I1203 21:22:35.497163 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:35 crc kubenswrapper[4765]: I1203 21:22:35.571747 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjrsr"] Dec 03 21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.157572 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.157966 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.245374 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.455251 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qjrsr" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="registry-server" containerID="cri-o://b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e" gracePeriod=2 Dec 03 
21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.530016 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.786045 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkc4k"] Dec 03 21:22:37 crc kubenswrapper[4765]: I1203 21:22:37.997767 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.184719 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-catalog-content\") pod \"929c4c33-0001-409b-a278-6b07996300c1\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.185594 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-utilities\") pod \"929c4c33-0001-409b-a278-6b07996300c1\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.185691 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmqt\" (UniqueName: \"kubernetes.io/projected/929c4c33-0001-409b-a278-6b07996300c1-kube-api-access-dcmqt\") pod \"929c4c33-0001-409b-a278-6b07996300c1\" (UID: \"929c4c33-0001-409b-a278-6b07996300c1\") " Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.186228 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-utilities" (OuterVolumeSpecName: "utilities") pod "929c4c33-0001-409b-a278-6b07996300c1" (UID: "929c4c33-0001-409b-a278-6b07996300c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.197772 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929c4c33-0001-409b-a278-6b07996300c1-kube-api-access-dcmqt" (OuterVolumeSpecName: "kube-api-access-dcmqt") pod "929c4c33-0001-409b-a278-6b07996300c1" (UID: "929c4c33-0001-409b-a278-6b07996300c1"). InnerVolumeSpecName "kube-api-access-dcmqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.245104 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "929c4c33-0001-409b-a278-6b07996300c1" (UID: "929c4c33-0001-409b-a278-6b07996300c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.288276 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.288350 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929c4c33-0001-409b-a278-6b07996300c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.288371 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmqt\" (UniqueName: \"kubernetes.io/projected/929c4c33-0001-409b-a278-6b07996300c1-kube-api-access-dcmqt\") on node \"crc\" DevicePath \"\"" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.469980 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" 
event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerDied","Data":"b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e"} Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.470051 4765 scope.go:117] "RemoveContainer" containerID="b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.469901 4765 generic.go:334] "Generic (PLEG): container finished" podID="929c4c33-0001-409b-a278-6b07996300c1" containerID="b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e" exitCode=0 Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.470004 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjrsr" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.470258 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjrsr" event={"ID":"929c4c33-0001-409b-a278-6b07996300c1","Type":"ContainerDied","Data":"f892922bf5dac8bf1a893a843ab44f16e12abc0e5ce8a86209facfae40913d5e"} Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.498243 4765 scope.go:117] "RemoveContainer" containerID="fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e" Dec 03 21:22:38 crc kubenswrapper[4765]: E1203 21:22:38.512027 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod929c4c33_0001_409b_a278_6b07996300c1.slice\": RecentStats: unable to find data in memory cache]" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.518827 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjrsr"] Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.520616 4765 scope.go:117] "RemoveContainer" containerID="b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932" Dec 03 21:22:38 crc 
kubenswrapper[4765]: I1203 21:22:38.525754 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qjrsr"] Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.559436 4765 scope.go:117] "RemoveContainer" containerID="b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e" Dec 03 21:22:38 crc kubenswrapper[4765]: E1203 21:22:38.559840 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e\": container with ID starting with b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e not found: ID does not exist" containerID="b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.559898 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e"} err="failed to get container status \"b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e\": rpc error: code = NotFound desc = could not find container \"b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e\": container with ID starting with b64ca25924add02f46875a9742fedab8e718d48b66ed9e28b340e62f438cc08e not found: ID does not exist" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.559931 4765 scope.go:117] "RemoveContainer" containerID="fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e" Dec 03 21:22:38 crc kubenswrapper[4765]: E1203 21:22:38.560368 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e\": container with ID starting with fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e not found: ID does not exist" 
containerID="fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.560398 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e"} err="failed to get container status \"fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e\": rpc error: code = NotFound desc = could not find container \"fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e\": container with ID starting with fd95b3b84b1cc961ac7af35bc159374716faf5876879a17d3adb62965467ef4e not found: ID does not exist" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.560418 4765 scope.go:117] "RemoveContainer" containerID="b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932" Dec 03 21:22:38 crc kubenswrapper[4765]: E1203 21:22:38.560730 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932\": container with ID starting with b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932 not found: ID does not exist" containerID="b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932" Dec 03 21:22:38 crc kubenswrapper[4765]: I1203 21:22:38.560759 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932"} err="failed to get container status \"b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932\": rpc error: code = NotFound desc = could not find container \"b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932\": container with ID starting with b5a3e1879ddfbbc1cf55d350b3024a0296385e59ca0ca6adb866ed6975b58932 not found: ID does not exist" Dec 03 21:22:39 crc kubenswrapper[4765]: I1203 21:22:39.486917 4765 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bkc4k" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="registry-server" containerID="cri-o://a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421" gracePeriod=2 Dec 03 21:22:39 crc kubenswrapper[4765]: I1203 21:22:39.970404 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkc4k" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.124286 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-catalog-content\") pod \"0666b38b-7b33-4361-b104-614c47f04ae4\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.124633 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cfvq\" (UniqueName: \"kubernetes.io/projected/0666b38b-7b33-4361-b104-614c47f04ae4-kube-api-access-5cfvq\") pod \"0666b38b-7b33-4361-b104-614c47f04ae4\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.124776 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-utilities\") pod \"0666b38b-7b33-4361-b104-614c47f04ae4\" (UID: \"0666b38b-7b33-4361-b104-614c47f04ae4\") " Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.125729 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-utilities" (OuterVolumeSpecName: "utilities") pod "0666b38b-7b33-4361-b104-614c47f04ae4" (UID: "0666b38b-7b33-4361-b104-614c47f04ae4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.136189 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0666b38b-7b33-4361-b104-614c47f04ae4-kube-api-access-5cfvq" (OuterVolumeSpecName: "kube-api-access-5cfvq") pod "0666b38b-7b33-4361-b104-614c47f04ae4" (UID: "0666b38b-7b33-4361-b104-614c47f04ae4"). InnerVolumeSpecName "kube-api-access-5cfvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.147503 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0666b38b-7b33-4361-b104-614c47f04ae4" (UID: "0666b38b-7b33-4361-b104-614c47f04ae4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.226421 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cfvq\" (UniqueName: \"kubernetes.io/projected/0666b38b-7b33-4361-b104-614c47f04ae4-kube-api-access-5cfvq\") on node \"crc\" DevicePath \"\"" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.226461 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.226474 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0666b38b-7b33-4361-b104-614c47f04ae4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.373579 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="929c4c33-0001-409b-a278-6b07996300c1" 
path="/var/lib/kubelet/pods/929c4c33-0001-409b-a278-6b07996300c1/volumes" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.499330 4765 generic.go:334] "Generic (PLEG): container finished" podID="0666b38b-7b33-4361-b104-614c47f04ae4" containerID="a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421" exitCode=0 Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.499371 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkc4k" event={"ID":"0666b38b-7b33-4361-b104-614c47f04ae4","Type":"ContainerDied","Data":"a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421"} Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.499396 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkc4k" event={"ID":"0666b38b-7b33-4361-b104-614c47f04ae4","Type":"ContainerDied","Data":"936802a633e86ee80bc668ad2f719a1361defb7565463533c22c3de7aa7836c8"} Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.499416 4765 scope.go:117] "RemoveContainer" containerID="a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421" Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.499496 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkc4k"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.533581 4765 scope.go:117] "RemoveContainer" containerID="7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.538180 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkc4k"]
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.554493 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkc4k"]
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.566048 4765 scope.go:117] "RemoveContainer" containerID="1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.624158 4765 scope.go:117] "RemoveContainer" containerID="a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421"
Dec 03 21:22:40 crc kubenswrapper[4765]: E1203 21:22:40.624772 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421\": container with ID starting with a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421 not found: ID does not exist" containerID="a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.624834 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421"} err="failed to get container status \"a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421\": rpc error: code = NotFound desc = could not find container \"a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421\": container with ID starting with a30b3e67a42bdc1cec34655e6f3c3c8fb0ef350e5c47c66669532814e222c421 not found: ID does not exist"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.624868 4765 scope.go:117] "RemoveContainer" containerID="7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b"
Dec 03 21:22:40 crc kubenswrapper[4765]: E1203 21:22:40.625721 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b\": container with ID starting with 7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b not found: ID does not exist" containerID="7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.625768 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b"} err="failed to get container status \"7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b\": rpc error: code = NotFound desc = could not find container \"7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b\": container with ID starting with 7ff5805719fac267516b26c3867647fd036130e02b1dd2ae61021c3f38a53e0b not found: ID does not exist"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.625799 4765 scope.go:117] "RemoveContainer" containerID="1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52"
Dec 03 21:22:40 crc kubenswrapper[4765]: E1203 21:22:40.626217 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52\": container with ID starting with 1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52 not found: ID does not exist" containerID="1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52"
Dec 03 21:22:40 crc kubenswrapper[4765]: I1203 21:22:40.626282 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52"} err="failed to get container status \"1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52\": rpc error: code = NotFound desc = could not find container \"1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52\": container with ID starting with 1e8839a1e025f56eb0d829b48da0ee1e661762a32ea422d19b2c495f85c82d52 not found: ID does not exist"
Dec 03 21:22:42 crc kubenswrapper[4765]: I1203 21:22:42.404777 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" path="/var/lib/kubelet/pods/0666b38b-7b33-4361-b104-614c47f04ae4/volumes"
Dec 03 21:22:48 crc kubenswrapper[4765]: I1203 21:22:48.360156 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:22:48 crc kubenswrapper[4765]: E1203 21:22:48.361052 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:22:59 crc kubenswrapper[4765]: I1203 21:22:59.359970 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:22:59 crc kubenswrapper[4765]: E1203 21:22:59.360821 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:23:11 crc kubenswrapper[4765]: I1203 21:23:11.360129 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:23:11 crc kubenswrapper[4765]: E1203 21:23:11.361168 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:23:24 crc kubenswrapper[4765]: I1203 21:23:24.399995 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:23:24 crc kubenswrapper[4765]: E1203 21:23:24.400839 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5"
Dec 03 21:23:38 crc kubenswrapper[4765]: I1203 21:23:38.360453 4765 scope.go:117] "RemoveContainer" containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad"
Dec 03 21:23:39 crc kubenswrapper[4765]: I1203 21:23:39.136551 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"41d538dd921f81e82465b8f23aeafdb03ce9f27f87159160ebeb5c02c6c079b1"}
Dec 03 21:23:50 crc kubenswrapper[4765]: I1203 21:23:50.265383 4765 generic.go:334] "Generic (PLEG): container finished" podID="52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" containerID="623c8e81c671954f3d1d5f34ff7cb5c80e31a3fca4d037b8493ae808d6a277a2" exitCode=0
Dec 03 21:23:50 crc kubenswrapper[4765]: I1203 21:23:50.265464 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" event={"ID":"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96","Type":"ContainerDied","Data":"623c8e81c671954f3d1d5f34ff7cb5c80e31a3fca4d037b8493ae808d6a277a2"}
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.718607 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw"
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.866932 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ceph\") pod \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") "
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.867986 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-combined-ca-bundle\") pod \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") "
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.868089 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ssh-key\") pod \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") "
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.868151 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-secret-0\") pod \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") "
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.868236 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx8p7\" (UniqueName: \"kubernetes.io/projected/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-kube-api-access-wx8p7\") pod \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") "
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.868277 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-inventory\") pod \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\" (UID: \"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96\") "
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.873000 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" (UID: "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.873573 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ceph" (OuterVolumeSpecName: "ceph") pod "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" (UID: "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.874404 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-kube-api-access-wx8p7" (OuterVolumeSpecName: "kube-api-access-wx8p7") pod "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" (UID: "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96"). InnerVolumeSpecName "kube-api-access-wx8p7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.910252 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" (UID: "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.913396 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" (UID: "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.915779 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-inventory" (OuterVolumeSpecName: "inventory") pod "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" (UID: "52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.975985 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.976040 4765 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.976061 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx8p7\" (UniqueName: \"kubernetes.io/projected/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-kube-api-access-wx8p7\") on node \"crc\" DevicePath \"\""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.976080 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-inventory\") on node \"crc\" DevicePath \"\""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.976097 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-ceph\") on node \"crc\" DevicePath \"\""
Dec 03 21:23:51 crc kubenswrapper[4765]: I1203 21:23:51.976114 4765 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.288969 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw" event={"ID":"52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96","Type":"ContainerDied","Data":"e5d81fa59c068bfd3b2609d9b18b43a0f2e6b429ec0648f1b18fc4bccbfd98aa"}
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.289033 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d81fa59c068bfd3b2609d9b18b43a0f2e6b429ec0648f1b18fc4bccbfd98aa"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.289029 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448210 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"]
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.448855 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="extract-utilities"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448869 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="extract-utilities"
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.448894 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="extract-content"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448902 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="extract-content"
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.448920 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448930 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.448940 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="registry-server"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448947 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="registry-server"
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.448962 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="extract-utilities"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448969 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="extract-utilities"
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.448988 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="registry-server"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.448995 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="registry-server"
Dec 03 21:23:52 crc kubenswrapper[4765]: E1203 21:23:52.449013 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="extract-content"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.449020 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="extract-content"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.449241 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0666b38b-7b33-4361-b104-614c47f04ae4" containerName="registry-server"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.449255 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.449287 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="929c4c33-0001-409b-a278-6b07996300c1" containerName="registry-server"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.449857 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"]
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.449934 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471030 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471054 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471237 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471243 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471371 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471526 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471673 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471840 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.471974 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jfdb8"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491623 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491713 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491799 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491820 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491850 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzc8x\" (UniqueName: \"kubernetes.io/projected/d13320a0-48f4-4813-9692-9554f411d998-kube-api-access-bzc8x\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491887 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491936 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491957 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.491983 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.492007 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.593862 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.593986 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594024 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594072 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594128 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594172 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594218 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594271 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594389 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594422 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594473 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzc8x\" (UniqueName: \"kubernetes.io/projected/d13320a0-48f4-4813-9692-9554f411d998-kube-api-access-bzc8x\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594912 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.594946 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.602849 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.603840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.604392 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.605787 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.605884 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.607822 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.609018 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.610709 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.615997 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzc8x\" (UniqueName: \"kubernetes.io/projected/d13320a0-48f4-4813-9692-9554f411d998-kube-api-access-bzc8x\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:52 crc kubenswrapper[4765]: I1203 21:23:52.796861 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"
Dec 03 21:23:53 crc kubenswrapper[4765]: I1203 21:23:53.373366 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf"]
Dec 03 21:23:54 crc kubenswrapper[4765]: I1203 21:23:54.316368 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" event={"ID":"d13320a0-48f4-4813-9692-9554f411d998","Type":"ContainerStarted","Data":"3aff6dea07f53474f1cd5086de207f714d86483a9a6ab790a66d7ad0585e4f3f"}
Dec 03 21:23:54 crc kubenswrapper[4765]: I1203 21:23:54.316802 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" event={"ID":"d13320a0-48f4-4813-9692-9554f411d998","Type":"ContainerStarted","Data":"adb4a72adecc35db08408e4470bbcfea1dac547aed47d9dc8607a1e58e91c574"}
Dec 03 21:23:54 crc kubenswrapper[4765]: I1203 21:23:54.347097 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" podStartSLOduration=1.782017521 podStartE2EDuration="2.347075625s" podCreationTimestamp="2025-12-03 21:23:52 +0000 UTC" firstStartedPulling="2025-12-03 21:23:53.365861759 +0000 UTC m=+2731.296406920" lastFinishedPulling="2025-12-03 21:23:53.930919873 +0000 UTC m=+2731.861465024" observedRunningTime="2025-12-03 21:23:54.34163673 +0000 UTC m=+2732.272181881" watchObservedRunningTime="2025-12-03 21:23:54.347075625 +0000 UTC m=+2732.277620776"
Dec 03 21:25:54 crc kubenswrapper[4765]: I1203 21:25:54.798671 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03
21:25:54 crc kubenswrapper[4765]: I1203 21:25:54.799294 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:26:24 crc kubenswrapper[4765]: I1203 21:26:24.798505 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:26:24 crc kubenswrapper[4765]: I1203 21:26:24.799079 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.166492 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zxtxt"] Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.170944 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.182659 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxtxt"] Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.281376 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26wh\" (UniqueName: \"kubernetes.io/projected/c1be2116-0972-40bf-979f-6868d1d62733-kube-api-access-k26wh\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.281481 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-utilities\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.281581 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-catalog-content\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.383222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-catalog-content\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.383358 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-k26wh\" (UniqueName: \"kubernetes.io/projected/c1be2116-0972-40bf-979f-6868d1d62733-kube-api-access-k26wh\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.383432 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-utilities\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.383732 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-catalog-content\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.383889 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-utilities\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.416169 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k26wh\" (UniqueName: \"kubernetes.io/projected/c1be2116-0972-40bf-979f-6868d1d62733-kube-api-access-k26wh\") pod \"redhat-operators-zxtxt\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.498679 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:26:53 crc kubenswrapper[4765]: I1203 21:26:53.977637 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxtxt"] Dec 03 21:26:54 crc kubenswrapper[4765]: I1203 21:26:54.240945 4765 generic.go:334] "Generic (PLEG): container finished" podID="c1be2116-0972-40bf-979f-6868d1d62733" containerID="011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56" exitCode=0 Dec 03 21:26:54 crc kubenswrapper[4765]: I1203 21:26:54.241020 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerDied","Data":"011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56"} Dec 03 21:26:54 crc kubenswrapper[4765]: I1203 21:26:54.241424 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerStarted","Data":"2591778ea3aaf8d4eed94272e9e0384578e2ce207c213d878c9e21671c912be0"} Dec 03 21:26:54 crc kubenswrapper[4765]: I1203 21:26:54.798236 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:26:54 crc kubenswrapper[4765]: I1203 21:26:54.798439 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:26:54 crc kubenswrapper[4765]: I1203 21:26:54.798627 4765 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:26:55 crc kubenswrapper[4765]: I1203 21:26:55.255767 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41d538dd921f81e82465b8f23aeafdb03ce9f27f87159160ebeb5c02c6c079b1"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:26:55 crc kubenswrapper[4765]: I1203 21:26:55.256223 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://41d538dd921f81e82465b8f23aeafdb03ce9f27f87159160ebeb5c02c6c079b1" gracePeriod=600 Dec 03 21:26:55 crc kubenswrapper[4765]: I1203 21:26:55.258727 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerStarted","Data":"1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d"} Dec 03 21:26:56 crc kubenswrapper[4765]: I1203 21:26:56.266785 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="41d538dd921f81e82465b8f23aeafdb03ce9f27f87159160ebeb5c02c6c079b1" exitCode=0 Dec 03 21:26:56 crc kubenswrapper[4765]: I1203 21:26:56.266834 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"41d538dd921f81e82465b8f23aeafdb03ce9f27f87159160ebeb5c02c6c079b1"} Dec 03 21:26:56 crc kubenswrapper[4765]: I1203 21:26:56.267171 4765 scope.go:117] "RemoveContainer" 
containerID="6e64fb2015a6338dd330f2adc5a0cb29cca34f4ad460721fbbbea1cf2966bdad" Dec 03 21:26:56 crc kubenswrapper[4765]: I1203 21:26:56.269682 4765 generic.go:334] "Generic (PLEG): container finished" podID="c1be2116-0972-40bf-979f-6868d1d62733" containerID="1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d" exitCode=0 Dec 03 21:26:56 crc kubenswrapper[4765]: I1203 21:26:56.269731 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerDied","Data":"1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d"} Dec 03 21:26:58 crc kubenswrapper[4765]: I1203 21:26:58.301647 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de"} Dec 03 21:26:59 crc kubenswrapper[4765]: I1203 21:26:59.332459 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerStarted","Data":"00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8"} Dec 03 21:26:59 crc kubenswrapper[4765]: I1203 21:26:59.379345 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zxtxt" podStartSLOduration=3.841280581 podStartE2EDuration="6.379329245s" podCreationTimestamp="2025-12-03 21:26:53 +0000 UTC" firstStartedPulling="2025-12-03 21:26:54.242477236 +0000 UTC m=+2912.173022387" lastFinishedPulling="2025-12-03 21:26:56.78052586 +0000 UTC m=+2914.711071051" observedRunningTime="2025-12-03 21:26:59.378533093 +0000 UTC m=+2917.309078244" watchObservedRunningTime="2025-12-03 21:26:59.379329245 +0000 UTC m=+2917.309874396" Dec 03 21:27:03 crc kubenswrapper[4765]: I1203 
21:27:03.499536 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:27:03 crc kubenswrapper[4765]: I1203 21:27:03.500072 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:27:04 crc kubenswrapper[4765]: I1203 21:27:04.548621 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zxtxt" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="registry-server" probeResult="failure" output=< Dec 03 21:27:04 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 21:27:04 crc kubenswrapper[4765]: > Dec 03 21:27:13 crc kubenswrapper[4765]: I1203 21:27:13.567978 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:27:14 crc kubenswrapper[4765]: I1203 21:27:14.052217 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:27:14 crc kubenswrapper[4765]: I1203 21:27:14.114844 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxtxt"] Dec 03 21:27:15 crc kubenswrapper[4765]: I1203 21:27:15.527173 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zxtxt" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="registry-server" containerID="cri-o://00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8" gracePeriod=2 Dec 03 21:27:15 crc kubenswrapper[4765]: I1203 21:27:15.956745 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:27:15 crc kubenswrapper[4765]: I1203 21:27:15.972953 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-utilities\") pod \"c1be2116-0972-40bf-979f-6868d1d62733\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " Dec 03 21:27:15 crc kubenswrapper[4765]: I1203 21:27:15.973339 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k26wh\" (UniqueName: \"kubernetes.io/projected/c1be2116-0972-40bf-979f-6868d1d62733-kube-api-access-k26wh\") pod \"c1be2116-0972-40bf-979f-6868d1d62733\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " Dec 03 21:27:15 crc kubenswrapper[4765]: I1203 21:27:15.973400 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-catalog-content\") pod \"c1be2116-0972-40bf-979f-6868d1d62733\" (UID: \"c1be2116-0972-40bf-979f-6868d1d62733\") " Dec 03 21:27:15 crc kubenswrapper[4765]: I1203 21:27:15.974888 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-utilities" (OuterVolumeSpecName: "utilities") pod "c1be2116-0972-40bf-979f-6868d1d62733" (UID: "c1be2116-0972-40bf-979f-6868d1d62733"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.008804 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1be2116-0972-40bf-979f-6868d1d62733-kube-api-access-k26wh" (OuterVolumeSpecName: "kube-api-access-k26wh") pod "c1be2116-0972-40bf-979f-6868d1d62733" (UID: "c1be2116-0972-40bf-979f-6868d1d62733"). InnerVolumeSpecName "kube-api-access-k26wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.075952 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k26wh\" (UniqueName: \"kubernetes.io/projected/c1be2116-0972-40bf-979f-6868d1d62733-kube-api-access-k26wh\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.076562 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.108961 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1be2116-0972-40bf-979f-6868d1d62733" (UID: "c1be2116-0972-40bf-979f-6868d1d62733"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.177130 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1be2116-0972-40bf-979f-6868d1d62733-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.564354 4765 generic.go:334] "Generic (PLEG): container finished" podID="c1be2116-0972-40bf-979f-6868d1d62733" containerID="00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8" exitCode=0 Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.564457 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerDied","Data":"00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8"} Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.564500 4765 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zxtxt" event={"ID":"c1be2116-0972-40bf-979f-6868d1d62733","Type":"ContainerDied","Data":"2591778ea3aaf8d4eed94272e9e0384578e2ce207c213d878c9e21671c912be0"} Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.564493 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxtxt" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.564541 4765 scope.go:117] "RemoveContainer" containerID="00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.571334 4765 generic.go:334] "Generic (PLEG): container finished" podID="d13320a0-48f4-4813-9692-9554f411d998" containerID="3aff6dea07f53474f1cd5086de207f714d86483a9a6ab790a66d7ad0585e4f3f" exitCode=0 Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.571389 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" event={"ID":"d13320a0-48f4-4813-9692-9554f411d998","Type":"ContainerDied","Data":"3aff6dea07f53474f1cd5086de207f714d86483a9a6ab790a66d7ad0585e4f3f"} Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.611657 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxtxt"] Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.613543 4765 scope.go:117] "RemoveContainer" containerID="1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.629439 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zxtxt"] Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.644714 4765 scope.go:117] "RemoveContainer" containerID="011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.686377 4765 scope.go:117] "RemoveContainer" 
containerID="00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8" Dec 03 21:27:16 crc kubenswrapper[4765]: E1203 21:27:16.686909 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8\": container with ID starting with 00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8 not found: ID does not exist" containerID="00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.686974 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8"} err="failed to get container status \"00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8\": rpc error: code = NotFound desc = could not find container \"00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8\": container with ID starting with 00ecb230b1a2028503bf25a7b95190b2c04923b9bdca061cc5a3150c1a67abd8 not found: ID does not exist" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.687015 4765 scope.go:117] "RemoveContainer" containerID="1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d" Dec 03 21:27:16 crc kubenswrapper[4765]: E1203 21:27:16.687387 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d\": container with ID starting with 1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d not found: ID does not exist" containerID="1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.687479 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d"} err="failed to get container status \"1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d\": rpc error: code = NotFound desc = could not find container \"1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d\": container with ID starting with 1145810d9fad4c607b2b73bb99f9ef9a310fdaf7aadd915cb6ba7ccde01c7a8d not found: ID does not exist" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.687559 4765 scope.go:117] "RemoveContainer" containerID="011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56" Dec 03 21:27:16 crc kubenswrapper[4765]: E1203 21:27:16.687922 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56\": container with ID starting with 011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56 not found: ID does not exist" containerID="011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56" Dec 03 21:27:16 crc kubenswrapper[4765]: I1203 21:27:16.687970 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56"} err="failed to get container status \"011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56\": rpc error: code = NotFound desc = could not find container \"011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56\": container with ID starting with 011fcdcb6d3e398b360b82d424dedb28169a8b86c3d1c397331562a24fbebd56 not found: ID does not exist" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.011611 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.111629 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-1\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.111722 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzc8x\" (UniqueName: \"kubernetes.io/projected/d13320a0-48f4-4813-9692-9554f411d998-kube-api-access-bzc8x\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.111779 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-nova-extra-config-0\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.111843 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ssh-key\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.111897 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-custom-ceph-combined-ca-bundle\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 
21:27:18.111965 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-0\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.112018 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-1\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.112058 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-ceph-nova-0\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.112111 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-0\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.112143 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-inventory\") pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.112488 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ceph\") 
pod \"d13320a0-48f4-4813-9692-9554f411d998\" (UID: \"d13320a0-48f4-4813-9692-9554f411d998\") " Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.117922 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.118801 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ceph" (OuterVolumeSpecName: "ceph") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.132150 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13320a0-48f4-4813-9692-9554f411d998-kube-api-access-bzc8x" (OuterVolumeSpecName: "kube-api-access-bzc8x") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "kube-api-access-bzc8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.137443 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.146210 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.146912 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.153093 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.154779 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.159686 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.163112 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.173646 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-inventory" (OuterVolumeSpecName: "inventory") pod "d13320a0-48f4-4813-9692-9554f411d998" (UID: "d13320a0-48f4-4813-9692-9554f411d998"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214061 4765 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214084 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214093 4765 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214107 4765 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214116 4765 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214124 4765 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d13320a0-48f4-4813-9692-9554f411d998-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214132 4765 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-0\") on node \"crc\" 
DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214140 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214147 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214155 4765 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d13320a0-48f4-4813-9692-9554f411d998-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.214163 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzc8x\" (UniqueName: \"kubernetes.io/projected/d13320a0-48f4-4813-9692-9554f411d998-kube-api-access-bzc8x\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.371184 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1be2116-0972-40bf-979f-6868d1d62733" path="/var/lib/kubelet/pods/c1be2116-0972-40bf-979f-6868d1d62733/volumes" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.600173 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" event={"ID":"d13320a0-48f4-4813-9692-9554f411d998","Type":"ContainerDied","Data":"adb4a72adecc35db08408e4470bbcfea1dac547aed47d9dc8607a1e58e91c574"} Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.600247 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb4a72adecc35db08408e4470bbcfea1dac547aed47d9dc8607a1e58e91c574" Dec 03 21:27:18 crc kubenswrapper[4765]: I1203 21:27:18.600370 4765 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.878721 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 03 21:27:32 crc kubenswrapper[4765]: E1203 21:27:32.879518 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13320a0-48f4-4813-9692-9554f411d998" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.879535 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13320a0-48f4-4813-9692-9554f411d998" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 03 21:27:32 crc kubenswrapper[4765]: E1203 21:27:32.879572 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="registry-server" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.879578 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="registry-server" Dec 03 21:27:32 crc kubenswrapper[4765]: E1203 21:27:32.879591 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="extract-utilities" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.879599 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="extract-utilities" Dec 03 21:27:32 crc kubenswrapper[4765]: E1203 21:27:32.879610 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="extract-content" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.879616 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="extract-content" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.879762 
4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13320a0-48f4-4813-9692-9554f411d998" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.879851 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1be2116-0972-40bf-979f-6868d1d62733" containerName="registry-server" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.880741 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.883664 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.883865 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.891776 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.893342 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.899035 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.899891 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 03 21:27:32 crc kubenswrapper[4765]: I1203 21:27:32.929530 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.054983 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6827558-2402-4d4f-b230-eb41101a3c41-ceph\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055020 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-dev\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055039 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055056 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-scripts\") pod \"cinder-volume-volume1-0\" (UID: 
\"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055075 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055109 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/090dfe86-44b6-4444-9075-abfc758bc2e4-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055136 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055157 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-lib-modules\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055173 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " 
pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055190 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-sys\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055204 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-scripts\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055218 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055273 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055286 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055321 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055338 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-run\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055359 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25cf\" (UniqueName: \"kubernetes.io/projected/090dfe86-44b6-4444-9075-abfc758bc2e4-kube-api-access-t25cf\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-run\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055403 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-sys\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055456 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-dev\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055596 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055652 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055716 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055767 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nq2b\" (UniqueName: \"kubernetes.io/projected/a6827558-2402-4d4f-b230-eb41101a3c41-kube-api-access-2nq2b\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055791 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055813 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055848 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055882 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-config-data\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055936 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-locks-brick\") 
pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.055950 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.056003 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157244 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-sys\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157294 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-dev\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157348 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc 
kubenswrapper[4765]: I1203 21:27:33.157386 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157394 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-sys\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157425 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157394 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-dev\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157503 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nq2b\" (UniqueName: \"kubernetes.io/projected/a6827558-2402-4d4f-b230-eb41101a3c41-kube-api-access-2nq2b\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157528 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157554 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157708 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157712 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157759 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157761 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc 
kubenswrapper[4765]: I1203 21:27:33.157788 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157805 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157857 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-config-data\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157924 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157948 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.157858 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158067 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158080 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158111 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6827558-2402-4d4f-b230-eb41101a3c41-ceph\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158898 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-dev\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " 
pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158927 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158946 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158968 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.158971 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-dev\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159002 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159027 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/090dfe86-44b6-4444-9075-abfc758bc2e4-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159071 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159115 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-lib-modules\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159135 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159156 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-sys\") pod \"cinder-volume-volume1-0\" (UID: 
\"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159172 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-scripts\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159191 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159245 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159269 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159327 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-run\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159356 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25cf\" (UniqueName: \"kubernetes.io/projected/090dfe86-44b6-4444-9075-abfc758bc2e4-kube-api-access-t25cf\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159390 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-run\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159474 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-run\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159506 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159532 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-lib-modules\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") 
" pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159853 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6827558-2402-4d4f-b230-eb41101a3c41-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.159913 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-sys\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.160960 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.161006 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-run\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.161070 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/090dfe86-44b6-4444-9075-abfc758bc2e4-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.165053 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/090dfe86-44b6-4444-9075-abfc758bc2e4-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.165994 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.168034 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.168363 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6827558-2402-4d4f-b230-eb41101a3c41-ceph\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.168541 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.169056 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc 
kubenswrapper[4765]: I1203 21:27:33.169766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-config-data\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.172829 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.173248 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090dfe86-44b6-4444-9075-abfc758bc2e4-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.179747 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6827558-2402-4d4f-b230-eb41101a3c41-scripts\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.190145 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nq2b\" (UniqueName: \"kubernetes.io/projected/a6827558-2402-4d4f-b230-eb41101a3c41-kube-api-access-2nq2b\") pod \"cinder-backup-0\" (UID: \"a6827558-2402-4d4f-b230-eb41101a3c41\") " pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.196926 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25cf\" (UniqueName: 
\"kubernetes.io/projected/090dfe86-44b6-4444-9075-abfc758bc2e4-kube-api-access-t25cf\") pod \"cinder-volume-volume1-0\" (UID: \"090dfe86-44b6-4444-9075-abfc758bc2e4\") " pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.209449 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.221646 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.468366 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-vmq9m"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.469743 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.478175 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-vmq9m"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.539099 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-f585-account-create-update-dnq4r"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.541268 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.543499 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.550856 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-f585-account-create-update-dnq4r"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.566079 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd082e4-2821-403e-a3c7-d25b6b09d645-operator-scripts\") pod \"manila-db-create-vmq9m\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.566131 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5qbx\" (UniqueName: \"kubernetes.io/projected/7dd082e4-2821-403e-a3c7-d25b6b09d645-kube-api-access-v5qbx\") pod \"manila-db-create-vmq9m\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.587747 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b9578d667-6257p"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.589693 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.593864 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.594130 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.594269 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jnl5v" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.599055 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.605278 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9578d667-6257p"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.679212 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lvj\" (UniqueName: \"kubernetes.io/projected/f135f08a-3bde-41df-8f2b-1e910fa18b2d-kube-api-access-h6lvj\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.679274 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd082e4-2821-403e-a3c7-d25b6b09d645-operator-scripts\") pod \"manila-db-create-vmq9m\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680049 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5qbx\" (UniqueName: \"kubernetes.io/projected/7dd082e4-2821-403e-a3c7-d25b6b09d645-kube-api-access-v5qbx\") pod \"manila-db-create-vmq9m\" (UID: 
\"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680107 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dhbn\" (UniqueName: \"kubernetes.io/projected/736bb387-1ef7-4b32-9421-c6c8133d3e3c-kube-api-access-5dhbn\") pod \"manila-f585-account-create-update-dnq4r\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680206 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-scripts\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680250 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/736bb387-1ef7-4b32-9421-c6c8133d3e3c-operator-scripts\") pod \"manila-f585-account-create-update-dnq4r\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680310 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f135f08a-3bde-41df-8f2b-1e910fa18b2d-horizon-secret-key\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680337 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-config-data\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680356 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f135f08a-3bde-41df-8f2b-1e910fa18b2d-logs\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.680766 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd082e4-2821-403e-a3c7-d25b6b09d645-operator-scripts\") pod \"manila-db-create-vmq9m\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.702919 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5qbx\" (UniqueName: \"kubernetes.io/projected/7dd082e4-2821-403e-a3c7-d25b6b09d645-kube-api-access-v5qbx\") pod \"manila-db-create-vmq9m\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.702981 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f744fb785-k6zt9"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.704560 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.718315 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.720385 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.725744 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.725946 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-srrtt" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.726059 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.726219 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.745925 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f744fb785-k6zt9"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.775983 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.817177 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.818079 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-config-data\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.821352 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f135f08a-3bde-41df-8f2b-1e910fa18b2d-logs\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.821680 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lvj\" (UniqueName: \"kubernetes.io/projected/f135f08a-3bde-41df-8f2b-1e910fa18b2d-kube-api-access-h6lvj\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.821882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dhbn\" (UniqueName: \"kubernetes.io/projected/736bb387-1ef7-4b32-9421-c6c8133d3e3c-kube-api-access-5dhbn\") pod \"manila-f585-account-create-update-dnq4r\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.821996 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-scripts\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 
21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.825404 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f135f08a-3bde-41df-8f2b-1e910fa18b2d-logs\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.819280 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-config-data\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.825573 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/736bb387-1ef7-4b32-9421-c6c8133d3e3c-operator-scripts\") pod \"manila-f585-account-create-update-dnq4r\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.825709 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f135f08a-3bde-41df-8f2b-1e910fa18b2d-horizon-secret-key\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.826202 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-scripts\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.829662 4765 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/736bb387-1ef7-4b32-9421-c6c8133d3e3c-operator-scripts\") pod \"manila-f585-account-create-update-dnq4r\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.835738 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f135f08a-3bde-41df-8f2b-1e910fa18b2d-horizon-secret-key\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.856236 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6lvj\" (UniqueName: \"kubernetes.io/projected/f135f08a-3bde-41df-8f2b-1e910fa18b2d-kube-api-access-h6lvj\") pod \"horizon-5b9578d667-6257p\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.860684 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dhbn\" (UniqueName: \"kubernetes.io/projected/736bb387-1ef7-4b32-9421-c6c8133d3e3c-kube-api-access-5dhbn\") pod \"manila-f585-account-create-update-dnq4r\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.866051 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.896429 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.897981 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.904966 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.905167 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.905653 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.916090 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.927871 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-config-data\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.927932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.927961 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9b8v\" (UniqueName: \"kubernetes.io/projected/5554f4c9-6d19-4486-a97d-c41f400aedd6-kube-api-access-q9b8v\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:33 crc 
kubenswrapper[4765]: I1203 21:27:33.928001 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928040 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5554f4c9-6d19-4486-a97d-c41f400aedd6-horizon-secret-key\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928058 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928076 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-config-data\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928095 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " 
pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928123 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-ceph\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-scripts\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928193 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-scripts\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928207 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554f4c9-6d19-4486-a97d-c41f400aedd6-logs\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928221 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-logs\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 
21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928247 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgt4k\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-kube-api-access-kgt4k\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.928318 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.931485 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:27:33 crc kubenswrapper[4765]: I1203 21:27:33.998442 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.032995 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5554f4c9-6d19-4486-a97d-c41f400aedd6-horizon-secret-key\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033025 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033049 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-config-data\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " 
pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033064 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033095 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033115 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-ceph\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033130 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033149 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033188 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5xz\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-kube-api-access-2h5xz\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033221 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-scripts\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033236 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033254 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033279 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-scripts\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033306 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554f4c9-6d19-4486-a97d-c41f400aedd6-logs\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033321 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-logs\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033342 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033358 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgt4k\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-kube-api-access-kgt4k\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc 
kubenswrapper[4765]: I1203 21:27:34.033379 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033415 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-config-data\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033439 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033453 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9b8v\" (UniqueName: \"kubernetes.io/projected/5554f4c9-6d19-4486-a97d-c41f400aedd6-kube-api-access-q9b8v\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.033483 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.039271 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.042886 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5554f4c9-6d19-4486-a97d-c41f400aedd6-horizon-secret-key\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.045194 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-scripts\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.046839 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.047026 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-config-data\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.047277 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-logs\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.047809 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-scripts\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.048268 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-ceph\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.049176 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554f4c9-6d19-4486-a97d-c41f400aedd6-logs\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.050074 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-config-data\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.052653 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " 
pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.059352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.066141 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgt4k\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-kube-api-access-kgt4k\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.073685 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9b8v\" (UniqueName: \"kubernetes.io/projected/5554f4c9-6d19-4486-a97d-c41f400aedd6-kube-api-access-q9b8v\") pod \"horizon-6f744fb785-k6zt9\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.084656 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.095989 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.133473 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.135457 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.136468 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.139620 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.135494 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.140664 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc 
kubenswrapper[4765]: I1203 21:27:34.140705 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.140742 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5xz\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-kube-api-access-2h5xz\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.140856 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.140878 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.140902 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.140958 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.141179 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.141453 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.160358 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.160965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.188519 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5xz\" (UniqueName: 
\"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-kube-api-access-2h5xz\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.193335 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.194248 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.216567 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.229927 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.351046 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-vmq9m"] Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.571496 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-f585-account-create-update-dnq4r"] Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.715588 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9578d667-6257p"] Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.881583 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f585-account-create-update-dnq4r" event={"ID":"736bb387-1ef7-4b32-9421-c6c8133d3e3c","Type":"ContainerStarted","Data":"e7bbe396b0c678906f6ce22da8b34a3b92f64654ccd76e64dd972972239aad01"} Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.881860 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f585-account-create-update-dnq4r" event={"ID":"736bb387-1ef7-4b32-9421-c6c8133d3e3c","Type":"ContainerStarted","Data":"22fa99299623183a57800f072561cb58371f3f0645443440ae9f542aa9adca90"} Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.885813 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f744fb785-k6zt9"] Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.887327 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"090dfe86-44b6-4444-9075-abfc758bc2e4","Type":"ContainerStarted","Data":"1501b07018b19f30ac65e94085e3b121ff3865e078464d36b3db4ea66d4eba81"} Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.888725 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a6827558-2402-4d4f-b230-eb41101a3c41","Type":"ContainerStarted","Data":"266be68cb7e6d2dbf3e070972b9e6f2a256e98c8373cf4faf3d8ae46d93a5616"} Dec 03 
21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.890177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-vmq9m" event={"ID":"7dd082e4-2821-403e-a3c7-d25b6b09d645","Type":"ContainerStarted","Data":"b3b093fa882959b3cad20dad9f74dde41d8600aa14eb85fb0ddf0494880dc0c2"} Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.890199 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-vmq9m" event={"ID":"7dd082e4-2821-403e-a3c7-d25b6b09d645","Type":"ContainerStarted","Data":"d019a8bdcc7475aaad3f11d48519e4bcd68bc9a995ab149f842655a2d12a40da"} Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.902279 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-f585-account-create-update-dnq4r" podStartSLOduration=1.9022610599999998 podStartE2EDuration="1.90226106s" podCreationTimestamp="2025-12-03 21:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:27:34.897550912 +0000 UTC m=+2952.828096063" watchObservedRunningTime="2025-12-03 21:27:34.90226106 +0000 UTC m=+2952.832806211" Dec 03 21:27:34 crc kubenswrapper[4765]: I1203 21:27:34.914857 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-vmq9m" podStartSLOduration=1.9148385289999998 podStartE2EDuration="1.914838529s" podCreationTimestamp="2025-12-03 21:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:27:34.914334526 +0000 UTC m=+2952.844879677" watchObservedRunningTime="2025-12-03 21:27:34.914838529 +0000 UTC m=+2952.845383680" Dec 03 21:27:34 crc kubenswrapper[4765]: W1203 21:27:34.937467 4765 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554f4c9_6d19_4486_a97d_c41f400aedd6.slice/crio-9162659a4460dc5efea9dcfafd04d8dcf5c6dd5cf7b98756ddd5ebd40aa0f2bb WatchSource:0}: Error finding container 9162659a4460dc5efea9dcfafd04d8dcf5c6dd5cf7b98756ddd5ebd40aa0f2bb: Status 404 returned error can't find the container with id 9162659a4460dc5efea9dcfafd04d8dcf5c6dd5cf7b98756ddd5ebd40aa0f2bb Dec 03 21:27:34 crc kubenswrapper[4765]: W1203 21:27:34.939094 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf135f08a_3bde_41df_8f2b_1e910fa18b2d.slice/crio-e7f44c25f4042d19619068c087656061ecc37eea24fecc592970a69522350a41 WatchSource:0}: Error finding container e7f44c25f4042d19619068c087656061ecc37eea24fecc592970a69522350a41: Status 404 returned error can't find the container with id e7f44c25f4042d19619068c087656061ecc37eea24fecc592970a69522350a41 Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.088266 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.210350 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:35 crc kubenswrapper[4765]: W1203 21:27:35.239399 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bd340e0_235a_4017_838a_df96bcdd2532.slice/crio-ae18efa0341678efd04fa1ff3ab2babad4761a155bd4ae2cfb72866c09df7b08 WatchSource:0}: Error finding container ae18efa0341678efd04fa1ff3ab2babad4761a155bd4ae2cfb72866c09df7b08: Status 404 returned error can't find the container with id ae18efa0341678efd04fa1ff3ab2babad4761a155bd4ae2cfb72866c09df7b08 Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.900791 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"44de8756-d46a-4580-a670-de4ca2c1fbdc","Type":"ContainerStarted","Data":"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.901393 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44de8756-d46a-4580-a670-de4ca2c1fbdc","Type":"ContainerStarted","Data":"1b7647079f516dab02c72d2cf2894df1a0c50f630f44a6d35ef22a6ea345e21e"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.904117 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a6827558-2402-4d4f-b230-eb41101a3c41","Type":"ContainerStarted","Data":"537168cc5a114a6d6c16ebdf1810f5e83d40ee0cf9896ac5401a9d757d0dc0bf"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.904143 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a6827558-2402-4d4f-b230-eb41101a3c41","Type":"ContainerStarted","Data":"bcb70d2aa2b0a9c46e3e4552084743efdec6405db8523e46aa713dc32092555a"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.907846 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bd340e0-235a-4017-838a-df96bcdd2532","Type":"ContainerStarted","Data":"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.907870 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bd340e0-235a-4017-838a-df96bcdd2532","Type":"ContainerStarted","Data":"ae18efa0341678efd04fa1ff3ab2babad4761a155bd4ae2cfb72866c09df7b08"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.909672 4765 generic.go:334] "Generic (PLEG): container finished" podID="7dd082e4-2821-403e-a3c7-d25b6b09d645" containerID="b3b093fa882959b3cad20dad9f74dde41d8600aa14eb85fb0ddf0494880dc0c2" exitCode=0 Dec 03 21:27:35 crc 
kubenswrapper[4765]: I1203 21:27:35.909715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-vmq9m" event={"ID":"7dd082e4-2821-403e-a3c7-d25b6b09d645","Type":"ContainerDied","Data":"b3b093fa882959b3cad20dad9f74dde41d8600aa14eb85fb0ddf0494880dc0c2"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.911266 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9578d667-6257p" event={"ID":"f135f08a-3bde-41df-8f2b-1e910fa18b2d","Type":"ContainerStarted","Data":"e7f44c25f4042d19619068c087656061ecc37eea24fecc592970a69522350a41"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.912898 4765 generic.go:334] "Generic (PLEG): container finished" podID="736bb387-1ef7-4b32-9421-c6c8133d3e3c" containerID="e7bbe396b0c678906f6ce22da8b34a3b92f64654ccd76e64dd972972239aad01" exitCode=0 Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.912935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f585-account-create-update-dnq4r" event={"ID":"736bb387-1ef7-4b32-9421-c6c8133d3e3c","Type":"ContainerDied","Data":"e7bbe396b0c678906f6ce22da8b34a3b92f64654ccd76e64dd972972239aad01"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.916580 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"090dfe86-44b6-4444-9075-abfc758bc2e4","Type":"ContainerStarted","Data":"0a7284e103435c7700476adca6e82e2cf7be7a6a5a2bf79b809a128b06a7f96a"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.916639 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"090dfe86-44b6-4444-9075-abfc758bc2e4","Type":"ContainerStarted","Data":"72731290b3549005ca90c32de86c16cea5cf27aecfe1e64ac6508494d6313b5e"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.934807 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f744fb785-k6zt9" 
event={"ID":"5554f4c9-6d19-4486-a97d-c41f400aedd6","Type":"ContainerStarted","Data":"9162659a4460dc5efea9dcfafd04d8dcf5c6dd5cf7b98756ddd5ebd40aa0f2bb"} Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.939018 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.980121424 podStartE2EDuration="3.939003559s" podCreationTimestamp="2025-12-03 21:27:32 +0000 UTC" firstStartedPulling="2025-12-03 21:27:34.010021686 +0000 UTC m=+2951.940566837" lastFinishedPulling="2025-12-03 21:27:34.968903821 +0000 UTC m=+2952.899448972" observedRunningTime="2025-12-03 21:27:35.933782007 +0000 UTC m=+2953.864327158" watchObservedRunningTime="2025-12-03 21:27:35.939003559 +0000 UTC m=+2953.869548710" Dec 03 21:27:35 crc kubenswrapper[4765]: I1203 21:27:35.982690 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.9435235840000002 podStartE2EDuration="3.982674889s" podCreationTimestamp="2025-12-03 21:27:32 +0000 UTC" firstStartedPulling="2025-12-03 21:27:33.930786124 +0000 UTC m=+2951.861331265" lastFinishedPulling="2025-12-03 21:27:34.969937419 +0000 UTC m=+2952.900482570" observedRunningTime="2025-12-03 21:27:35.969626586 +0000 UTC m=+2953.900171737" watchObservedRunningTime="2025-12-03 21:27:35.982674889 +0000 UTC m=+2953.913220040" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.357428 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b9578d667-6257p"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.431693 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c468b5ffd-8p2bd"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.434936 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.442950 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.474671 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c468b5ffd-8p2bd"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.493423 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510337 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-tls-certs\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510396 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-scripts\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510420 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzjz\" (UniqueName: \"kubernetes.io/projected/6a49be96-f6b0-4694-b6d1-24dbaf704602-kube-api-access-8mzjz\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510464 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6a49be96-f6b0-4694-b6d1-24dbaf704602-logs\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510485 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-secret-key\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510509 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-combined-ca-bundle\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.510530 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-config-data\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.546367 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f744fb785-k6zt9"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.568246 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.575706 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-754897654-c5z9l"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.577261 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.584145 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-754897654-c5z9l"] Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.612621 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742566d1-3d02-42ea-8db1-e482ff699ada-config-data\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.612699 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhjh\" (UniqueName: \"kubernetes.io/projected/742566d1-3d02-42ea-8db1-e482ff699ada-kube-api-access-kfhjh\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.612787 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-tls-certs\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.612856 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-horizon-secret-key\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.612952 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-scripts\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613023 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mzjz\" (UniqueName: \"kubernetes.io/projected/6a49be96-f6b0-4694-b6d1-24dbaf704602-kube-api-access-8mzjz\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613149 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a49be96-f6b0-4694-b6d1-24dbaf704602-logs\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613185 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-secret-key\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613258 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-combined-ca-bundle\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613321 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-combined-ca-bundle\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613389 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-config-data\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613447 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-horizon-tls-certs\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613500 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742566d1-3d02-42ea-8db1-e482ff699ada-scripts\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.613525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742566d1-3d02-42ea-8db1-e482ff699ada-logs\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.614584 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-scripts\") pod 
\"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.615045 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a49be96-f6b0-4694-b6d1-24dbaf704602-logs\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.615441 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-config-data\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.624891 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-tls-certs\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.624908 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-combined-ca-bundle\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.627067 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-secret-key\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc 
kubenswrapper[4765]: I1203 21:27:36.632099 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mzjz\" (UniqueName: \"kubernetes.io/projected/6a49be96-f6b0-4694-b6d1-24dbaf704602-kube-api-access-8mzjz\") pod \"horizon-6c468b5ffd-8p2bd\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.718339 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742566d1-3d02-42ea-8db1-e482ff699ada-config-data\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.722490 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742566d1-3d02-42ea-8db1-e482ff699ada-config-data\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.722782 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhjh\" (UniqueName: \"kubernetes.io/projected/742566d1-3d02-42ea-8db1-e482ff699ada-kube-api-access-kfhjh\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.724782 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-horizon-secret-key\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.725384 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-combined-ca-bundle\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.725616 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-horizon-tls-certs\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.726100 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742566d1-3d02-42ea-8db1-e482ff699ada-scripts\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.726830 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742566d1-3d02-42ea-8db1-e482ff699ada-logs\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.726734 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742566d1-3d02-42ea-8db1-e482ff699ada-scripts\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.727192 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742566d1-3d02-42ea-8db1-e482ff699ada-logs\") pod 
\"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.731291 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-combined-ca-bundle\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.737793 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-horizon-tls-certs\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.743690 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhjh\" (UniqueName: \"kubernetes.io/projected/742566d1-3d02-42ea-8db1-e482ff699ada-kube-api-access-kfhjh\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.744254 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742566d1-3d02-42ea-8db1-e482ff699ada-horizon-secret-key\") pod \"horizon-754897654-c5z9l\" (UID: \"742566d1-3d02-42ea-8db1-e482ff699ada\") " pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.809015 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:36 crc kubenswrapper[4765]: I1203 21:27:36.912019 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.585781 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c468b5ffd-8p2bd"] Dec 03 21:27:37 crc kubenswrapper[4765]: W1203 21:27:37.615794 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a49be96_f6b0_4694_b6d1_24dbaf704602.slice/crio-5be1a2176904f3cb5d59d56fbb262443310ad42d8a77312dfe3db6fd0c685e7e WatchSource:0}: Error finding container 5be1a2176904f3cb5d59d56fbb262443310ad42d8a77312dfe3db6fd0c685e7e: Status 404 returned error can't find the container with id 5be1a2176904f3cb5d59d56fbb262443310ad42d8a77312dfe3db6fd0c685e7e Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.700614 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.701711 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.762079 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/736bb387-1ef7-4b32-9421-c6c8133d3e3c-operator-scripts\") pod \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.762168 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dhbn\" (UniqueName: \"kubernetes.io/projected/736bb387-1ef7-4b32-9421-c6c8133d3e3c-kube-api-access-5dhbn\") pod \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\" (UID: \"736bb387-1ef7-4b32-9421-c6c8133d3e3c\") " Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.762363 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd082e4-2821-403e-a3c7-d25b6b09d645-operator-scripts\") pod \"7dd082e4-2821-403e-a3c7-d25b6b09d645\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.762489 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5qbx\" (UniqueName: \"kubernetes.io/projected/7dd082e4-2821-403e-a3c7-d25b6b09d645-kube-api-access-v5qbx\") pod \"7dd082e4-2821-403e-a3c7-d25b6b09d645\" (UID: \"7dd082e4-2821-403e-a3c7-d25b6b09d645\") " Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.765698 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736bb387-1ef7-4b32-9421-c6c8133d3e3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "736bb387-1ef7-4b32-9421-c6c8133d3e3c" (UID: "736bb387-1ef7-4b32-9421-c6c8133d3e3c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.766374 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd082e4-2821-403e-a3c7-d25b6b09d645-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dd082e4-2821-403e-a3c7-d25b6b09d645" (UID: "7dd082e4-2821-403e-a3c7-d25b6b09d645"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.773553 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd082e4-2821-403e-a3c7-d25b6b09d645-kube-api-access-v5qbx" (OuterVolumeSpecName: "kube-api-access-v5qbx") pod "7dd082e4-2821-403e-a3c7-d25b6b09d645" (UID: "7dd082e4-2821-403e-a3c7-d25b6b09d645"). InnerVolumeSpecName "kube-api-access-v5qbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.773635 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736bb387-1ef7-4b32-9421-c6c8133d3e3c-kube-api-access-5dhbn" (OuterVolumeSpecName: "kube-api-access-5dhbn") pod "736bb387-1ef7-4b32-9421-c6c8133d3e3c" (UID: "736bb387-1ef7-4b32-9421-c6c8133d3e3c"). InnerVolumeSpecName "kube-api-access-5dhbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.829505 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-754897654-c5z9l"] Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.865141 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd082e4-2821-403e-a3c7-d25b6b09d645-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.865179 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5qbx\" (UniqueName: \"kubernetes.io/projected/7dd082e4-2821-403e-a3c7-d25b6b09d645-kube-api-access-v5qbx\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.865189 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/736bb387-1ef7-4b32-9421-c6c8133d3e3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.865197 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dhbn\" (UniqueName: \"kubernetes.io/projected/736bb387-1ef7-4b32-9421-c6c8133d3e3c-kube-api-access-5dhbn\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.969993 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c468b5ffd-8p2bd" event={"ID":"6a49be96-f6b0-4694-b6d1-24dbaf704602","Type":"ContainerStarted","Data":"5be1a2176904f3cb5d59d56fbb262443310ad42d8a77312dfe3db6fd0c685e7e"} Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.972645 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bd340e0-235a-4017-838a-df96bcdd2532","Type":"ContainerStarted","Data":"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551"} Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 
21:27:37.972761 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-log" containerID="cri-o://4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65" gracePeriod=30 Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.973149 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-httpd" containerID="cri-o://27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551" gracePeriod=30 Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.979005 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-vmq9m" event={"ID":"7dd082e4-2821-403e-a3c7-d25b6b09d645","Type":"ContainerDied","Data":"d019a8bdcc7475aaad3f11d48519e4bcd68bc9a995ab149f842655a2d12a40da"} Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.979024 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-vmq9m" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.979028 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d019a8bdcc7475aaad3f11d48519e4bcd68bc9a995ab149f842655a2d12a40da" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.983247 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f585-account-create-update-dnq4r" event={"ID":"736bb387-1ef7-4b32-9421-c6c8133d3e3c","Type":"ContainerDied","Data":"22fa99299623183a57800f072561cb58371f3f0645443440ae9f542aa9adca90"} Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.983270 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22fa99299623183a57800f072561cb58371f3f0645443440ae9f542aa9adca90" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.983355 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-f585-account-create-update-dnq4r" Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.991443 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44de8756-d46a-4580-a670-de4ca2c1fbdc","Type":"ContainerStarted","Data":"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d"} Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.991686 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-httpd" containerID="cri-o://677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d" gracePeriod=30 Dec 03 21:27:37 crc kubenswrapper[4765]: I1203 21:27:37.991685 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-log" 
containerID="cri-o://d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a" gracePeriod=30 Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.003236 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-754897654-c5z9l" event={"ID":"742566d1-3d02-42ea-8db1-e482ff699ada","Type":"ContainerStarted","Data":"02767ff9b288f4cecceb523771861c793397a323a5deb3e28da63dbf03a5c6d1"} Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.013197 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.013176735 podStartE2EDuration="5.013176735s" podCreationTimestamp="2025-12-03 21:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:27:37.998857598 +0000 UTC m=+2955.929402759" watchObservedRunningTime="2025-12-03 21:27:38.013176735 +0000 UTC m=+2955.943721906" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.028235 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.028219082 podStartE2EDuration="5.028219082s" podCreationTimestamp="2025-12-03 21:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:27:38.022623021 +0000 UTC m=+2955.953168172" watchObservedRunningTime="2025-12-03 21:27:38.028219082 +0000 UTC m=+2955.958764233" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.209576 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.222370 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.683246 4765 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783402 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-logs\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783533 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783557 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h5xz\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-kube-api-access-2h5xz\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783614 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-config-data\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783635 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-httpd-run\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783648 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-internal-tls-certs\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783786 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-ceph\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783808 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-scripts\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.783849 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-combined-ca-bundle\") pod \"4bd340e0-235a-4017-838a-df96bcdd2532\" (UID: \"4bd340e0-235a-4017-838a-df96bcdd2532\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.787690 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-logs" (OuterVolumeSpecName: "logs") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.787729 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.792196 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.794891 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-ceph" (OuterVolumeSpecName: "ceph") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.795565 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-kube-api-access-2h5xz" (OuterVolumeSpecName: "kube-api-access-2h5xz") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "kube-api-access-2h5xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.798406 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-scripts" (OuterVolumeSpecName: "scripts") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.811619 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.818474 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.859375 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-config-data" (OuterVolumeSpecName: "config-data") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.860214 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4bd340e0-235a-4017-838a-df96bcdd2532" (UID: "4bd340e0-235a-4017-838a-df96bcdd2532"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.896873 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-public-tls-certs\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.896992 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-combined-ca-bundle\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897050 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-ceph\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897120 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-scripts\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897186 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgt4k\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-kube-api-access-kgt4k\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897212 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897244 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-config-data\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897265 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-httpd-run\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897294 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-logs\") pod \"44de8756-d46a-4580-a670-de4ca2c1fbdc\" (UID: \"44de8756-d46a-4580-a670-de4ca2c1fbdc\") " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897732 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897748 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-logs\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897768 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 
21:27:38.897779 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h5xz\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-kube-api-access-2h5xz\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897788 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897797 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bd340e0-235a-4017-838a-df96bcdd2532-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897806 4765 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897813 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bd340e0-235a-4017-838a-df96bcdd2532-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.897821 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd340e0-235a-4017-838a-df96bcdd2532-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.902958 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.903147 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-logs" (OuterVolumeSpecName: "logs") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.906150 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-scripts" (OuterVolumeSpecName: "scripts") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.906441 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.906832 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-kube-api-access-kgt4k" (OuterVolumeSpecName: "kube-api-access-kgt4k") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "kube-api-access-kgt4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.906871 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-ceph" (OuterVolumeSpecName: "ceph") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.934802 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.939095 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.951427 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-config-data" (OuterVolumeSpecName: "config-data") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:38 crc kubenswrapper[4765]: I1203 21:27:38.963374 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "44de8756-d46a-4580-a670-de4ca2c1fbdc" (UID: "44de8756-d46a-4580-a670-de4ca2c1fbdc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000324 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgt4k\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-kube-api-access-kgt4k\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000401 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000431 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000440 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000449 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44de8756-d46a-4580-a670-de4ca2c1fbdc-logs\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000457 4765 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000468 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000478 4765 reconciler_common.go:293] "Volume 
detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44de8756-d46a-4580-a670-de4ca2c1fbdc-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000489 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.000497 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44de8756-d46a-4580-a670-de4ca2c1fbdc-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016780 4765 generic.go:334] "Generic (PLEG): container finished" podID="4bd340e0-235a-4017-838a-df96bcdd2532" containerID="27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551" exitCode=0 Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016813 4765 generic.go:334] "Generic (PLEG): container finished" podID="4bd340e0-235a-4017-838a-df96bcdd2532" containerID="4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65" exitCode=143 Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016870 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bd340e0-235a-4017-838a-df96bcdd2532","Type":"ContainerDied","Data":"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551"} Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016877 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016899 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bd340e0-235a-4017-838a-df96bcdd2532","Type":"ContainerDied","Data":"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65"} Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016911 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bd340e0-235a-4017-838a-df96bcdd2532","Type":"ContainerDied","Data":"ae18efa0341678efd04fa1ff3ab2babad4761a155bd4ae2cfb72866c09df7b08"} Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.016927 4765 scope.go:117] "RemoveContainer" containerID="27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.019831 4765 generic.go:334] "Generic (PLEG): container finished" podID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerID="677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d" exitCode=0 Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.019856 4765 generic.go:334] "Generic (PLEG): container finished" podID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerID="d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a" exitCode=143 Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.019877 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44de8756-d46a-4580-a670-de4ca2c1fbdc","Type":"ContainerDied","Data":"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d"} Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.019902 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"44de8756-d46a-4580-a670-de4ca2c1fbdc","Type":"ContainerDied","Data":"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a"} Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.019916 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44de8756-d46a-4580-a670-de4ca2c1fbdc","Type":"ContainerDied","Data":"1b7647079f516dab02c72d2cf2894df1a0c50f630f44a6d35ef22a6ea345e21e"} Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.019966 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.025541 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.048659 4765 scope.go:117] "RemoveContainer" containerID="4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.080356 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.101669 4765 scope.go:117] "RemoveContainer" containerID="27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.102073 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551\": container with ID starting with 27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551 not found: ID does not exist" containerID="27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102112 4765 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551"} err="failed to get container status \"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551\": rpc error: code = NotFound desc = could not find container \"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551\": container with ID starting with 27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551 not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102132 4765 scope.go:117] "RemoveContainer" containerID="4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.102352 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65\": container with ID starting with 4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65 not found: ID does not exist" containerID="4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102369 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65"} err="failed to get container status \"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65\": rpc error: code = NotFound desc = could not find container \"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65\": container with ID starting with 4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65 not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102384 4765 scope.go:117] "RemoveContainer" containerID="27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102528 4765 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551"} err="failed to get container status \"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551\": rpc error: code = NotFound desc = could not find container \"27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551\": container with ID starting with 27e5fe26d27eb893669320d5eafc6edc1caab1ca0458833ea839145bb12a5551 not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102541 4765 scope.go:117] "RemoveContainer" containerID="4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102717 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65"} err="failed to get container status \"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65\": rpc error: code = NotFound desc = could not find container \"4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65\": container with ID starting with 4d6e80c4666e02e7d0c1d677ac12816e0ea9097c1e66bbe60c4e427107e10e65 not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102730 4765 scope.go:117] "RemoveContainer" containerID="677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.102840 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.113373 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.136326 4765 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.137339 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736bb387-1ef7-4b32-9421-c6c8133d3e3c" containerName="mariadb-account-create-update" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137360 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="736bb387-1ef7-4b32-9421-c6c8133d3e3c" containerName="mariadb-account-create-update" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.137374 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-httpd" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137380 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-httpd" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.137402 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd082e4-2821-403e-a3c7-d25b6b09d645" containerName="mariadb-database-create" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137425 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd082e4-2821-403e-a3c7-d25b6b09d645" containerName="mariadb-database-create" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.137440 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-log" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137468 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-log" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.137489 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-log" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137509 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-log" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.137520 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-httpd" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137528 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-httpd" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137797 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-log" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137807 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd082e4-2821-403e-a3c7-d25b6b09d645" containerName="mariadb-database-create" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137816 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="736bb387-1ef7-4b32-9421-c6c8133d3e3c" containerName="mariadb-account-create-update" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137833 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-httpd" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137840 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" containerName="glance-log" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.137849 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" containerName="glance-httpd" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.138831 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.144330 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.144534 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-srrtt" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.144734 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.147856 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.150360 4765 scope.go:117] "RemoveContainer" containerID="d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.152709 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.174218 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.203965 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209133 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209175 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209219 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2d1fba0-111f-49ed-9992-e75c8f53d277-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209264 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d1fba0-111f-49ed-9992-e75c8f53d277-logs\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209288 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209362 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209403 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d1fba0-111f-49ed-9992-e75c8f53d277-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209423 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.209460 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5mc\" (UniqueName: \"kubernetes.io/projected/d2d1fba0-111f-49ed-9992-e75c8f53d277-kube-api-access-nc5mc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.223006 4765 scope.go:117] "RemoveContainer" containerID="677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.225761 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d\": container with ID starting with 677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d not found: ID does not exist" containerID="677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.225830 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d"} err="failed to get container status 
\"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d\": rpc error: code = NotFound desc = could not find container \"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d\": container with ID starting with 677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.225860 4765 scope.go:117] "RemoveContainer" containerID="d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a" Dec 03 21:27:39 crc kubenswrapper[4765]: E1203 21:27:39.226128 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a\": container with ID starting with d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a not found: ID does not exist" containerID="d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.226164 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a"} err="failed to get container status \"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a\": rpc error: code = NotFound desc = could not find container \"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a\": container with ID starting with d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.226183 4765 scope.go:117] "RemoveContainer" containerID="677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.226553 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d"} err="failed to get 
container status \"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d\": rpc error: code = NotFound desc = could not find container \"677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d\": container with ID starting with 677b8a43936e005f59d8293100f73c342ff032198af1f877babe9c0e9b476b2d not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.226577 4765 scope.go:117] "RemoveContainer" containerID="d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.227018 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a"} err="failed to get container status \"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a\": rpc error: code = NotFound desc = could not find container \"d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a\": container with ID starting with d5667a0c3fed75a8feaf63e2db865b079f826fbadb84491d55f50628eac7672a not found: ID does not exist" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.233464 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.236885 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.239397 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.239649 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.254668 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311139 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43154ec4-ba15-4d12-afeb-a3528c1269c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311165 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311195 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d1fba0-111f-49ed-9992-e75c8f53d277-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311242 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311267 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5mc\" (UniqueName: \"kubernetes.io/projected/d2d1fba0-111f-49ed-9992-e75c8f53d277-kube-api-access-nc5mc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311317 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.311342 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.312200 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2d1fba0-111f-49ed-9992-e75c8f53d277-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.312222 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.312321 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313274 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/43154ec4-ba15-4d12-afeb-a3528c1269c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313352 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2d1fba0-111f-49ed-9992-e75c8f53d277-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313436 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313527 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-646qr\" (UniqueName: \"kubernetes.io/projected/43154ec4-ba15-4d12-afeb-a3528c1269c8-kube-api-access-646qr\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313565 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d1fba0-111f-49ed-9992-e75c8f53d277-logs\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313618 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.313650 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43154ec4-ba15-4d12-afeb-a3528c1269c8-logs\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.315640 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d1fba0-111f-49ed-9992-e75c8f53d277-logs\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.318983 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.324006 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.330922 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.331290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2d1fba0-111f-49ed-9992-e75c8f53d277-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.332883 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d1fba0-111f-49ed-9992-e75c8f53d277-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.355400 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5mc\" (UniqueName: \"kubernetes.io/projected/d2d1fba0-111f-49ed-9992-e75c8f53d277-kube-api-access-nc5mc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.372257 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"d2d1fba0-111f-49ed-9992-e75c8f53d277\") " pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.416887 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43154ec4-ba15-4d12-afeb-a3528c1269c8-logs\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc 
kubenswrapper[4765]: I1203 21:27:39.416974 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.416997 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43154ec4-ba15-4d12-afeb-a3528c1269c8-ceph\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417048 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417145 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417182 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43154ec4-ba15-4d12-afeb-a3528c1269c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417223 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417246 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-646qr\" (UniqueName: \"kubernetes.io/projected/43154ec4-ba15-4d12-afeb-a3528c1269c8-kube-api-access-646qr\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.417405 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43154ec4-ba15-4d12-afeb-a3528c1269c8-logs\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.418115 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.418291 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/43154ec4-ba15-4d12-afeb-a3528c1269c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.422586 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.423209 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.433808 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.453385 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43154ec4-ba15-4d12-afeb-a3528c1269c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.456008 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43154ec4-ba15-4d12-afeb-a3528c1269c8-ceph\") pod \"glance-default-external-api-0\" (UID: 
\"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.462175 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-646qr\" (UniqueName: \"kubernetes.io/projected/43154ec4-ba15-4d12-afeb-a3528c1269c8-kube-api-access-646qr\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.486771 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.491699 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43154ec4-ba15-4d12-afeb-a3528c1269c8\") " pod="openstack/glance-default-external-api-0" Dec 03 21:27:39 crc kubenswrapper[4765]: I1203 21:27:39.599953 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 21:27:40 crc kubenswrapper[4765]: I1203 21:27:40.019106 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 21:27:40 crc kubenswrapper[4765]: W1203 21:27:40.023511 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2d1fba0_111f_49ed_9992_e75c8f53d277.slice/crio-17d6245eae46c2a6684fa993245ab898cdf54e04161add525ce4506c9b1ef673 WatchSource:0}: Error finding container 17d6245eae46c2a6684fa993245ab898cdf54e04161add525ce4506c9b1ef673: Status 404 returned error can't find the container with id 17d6245eae46c2a6684fa993245ab898cdf54e04161add525ce4506c9b1ef673 Dec 03 21:27:40 crc kubenswrapper[4765]: I1203 21:27:40.240453 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 21:27:40 crc kubenswrapper[4765]: I1203 21:27:40.381575 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44de8756-d46a-4580-a670-de4ca2c1fbdc" path="/var/lib/kubelet/pods/44de8756-d46a-4580-a670-de4ca2c1fbdc/volumes" Dec 03 21:27:40 crc kubenswrapper[4765]: I1203 21:27:40.382448 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd340e0-235a-4017-838a-df96bcdd2532" path="/var/lib/kubelet/pods/4bd340e0-235a-4017-838a-df96bcdd2532/volumes" Dec 03 21:27:41 crc kubenswrapper[4765]: I1203 21:27:41.048484 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d1fba0-111f-49ed-9992-e75c8f53d277","Type":"ContainerStarted","Data":"25f0381c79510344b7893921d51efdd1d8429fe5835bf2e16de7a85df1a8119c"} Dec 03 21:27:41 crc kubenswrapper[4765]: I1203 21:27:41.049121 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d2d1fba0-111f-49ed-9992-e75c8f53d277","Type":"ContainerStarted","Data":"17d6245eae46c2a6684fa993245ab898cdf54e04161add525ce4506c9b1ef673"} Dec 03 21:27:41 crc kubenswrapper[4765]: I1203 21:27:41.053042 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43154ec4-ba15-4d12-afeb-a3528c1269c8","Type":"ContainerStarted","Data":"407f2f39834936d608a44b647d9b380b676bb56a26c5966297bbcb05462e8191"} Dec 03 21:27:41 crc kubenswrapper[4765]: I1203 21:27:41.053106 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43154ec4-ba15-4d12-afeb-a3528c1269c8","Type":"ContainerStarted","Data":"e25c9a869e756378da045d10c7e167bf704d69b6b91d8a032a992da0c720cd94"} Dec 03 21:27:42 crc kubenswrapper[4765]: I1203 21:27:42.066508 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d2d1fba0-111f-49ed-9992-e75c8f53d277","Type":"ContainerStarted","Data":"88171736521b3c99a19fd222ba13c5db4d4000633494ee0ee59ec9414b37df3b"} Dec 03 21:27:42 crc kubenswrapper[4765]: I1203 21:27:42.096074 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.09605011 podStartE2EDuration="3.09605011s" podCreationTimestamp="2025-12-03 21:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:27:42.088442894 +0000 UTC m=+2960.018988055" watchObservedRunningTime="2025-12-03 21:27:42.09605011 +0000 UTC m=+2960.026595271" Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.403673 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.421669 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-volume-volume1-0" Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.978674 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-wvfd8"] Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.980244 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.982402 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-hc4kg" Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.987621 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 03 21:27:43 crc kubenswrapper[4765]: I1203 21:27:43.987669 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-wvfd8"] Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.058767 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-job-config-data\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.059192 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-config-data\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.059495 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzs6\" (UniqueName: \"kubernetes.io/projected/53a10b13-ab07-4448-bbaa-f2077c07c07d-kube-api-access-kmzs6\") pod \"manila-db-sync-wvfd8\" (UID: 
\"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.059698 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-combined-ca-bundle\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.161077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-combined-ca-bundle\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.161177 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-job-config-data\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.161218 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-config-data\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.161326 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzs6\" (UniqueName: \"kubernetes.io/projected/53a10b13-ab07-4448-bbaa-f2077c07c07d-kube-api-access-kmzs6\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc 
kubenswrapper[4765]: I1203 21:27:44.172023 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-config-data\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.176023 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-job-config-data\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.177189 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-combined-ca-bundle\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.179133 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzs6\" (UniqueName: \"kubernetes.io/projected/53a10b13-ab07-4448-bbaa-f2077c07c07d-kube-api-access-kmzs6\") pod \"manila-db-sync-wvfd8\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:44 crc kubenswrapper[4765]: I1203 21:27:44.301056 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-wvfd8" Dec 03 21:27:46 crc kubenswrapper[4765]: I1203 21:27:46.809594 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-wvfd8"] Dec 03 21:27:46 crc kubenswrapper[4765]: W1203 21:27:46.819252 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a10b13_ab07_4448_bbaa_f2077c07c07d.slice/crio-93692eddb51b37b9caf24251e5b34d5cf5de06dbe0d572c635253eedb95b5b68 WatchSource:0}: Error finding container 93692eddb51b37b9caf24251e5b34d5cf5de06dbe0d572c635253eedb95b5b68: Status 404 returned error can't find the container with id 93692eddb51b37b9caf24251e5b34d5cf5de06dbe0d572c635253eedb95b5b68 Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.113031 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wvfd8" event={"ID":"53a10b13-ab07-4448-bbaa-f2077c07c07d","Type":"ContainerStarted","Data":"93692eddb51b37b9caf24251e5b34d5cf5de06dbe0d572c635253eedb95b5b68"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.116489 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43154ec4-ba15-4d12-afeb-a3528c1269c8","Type":"ContainerStarted","Data":"60feb6e1af9f656106bb7ddf907d47048d6ce24b514339b659bc057056548468"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.120116 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c468b5ffd-8p2bd" event={"ID":"6a49be96-f6b0-4694-b6d1-24dbaf704602","Type":"ContainerStarted","Data":"2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.120167 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c468b5ffd-8p2bd" 
event={"ID":"6a49be96-f6b0-4694-b6d1-24dbaf704602","Type":"ContainerStarted","Data":"bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.123107 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9578d667-6257p" event={"ID":"f135f08a-3bde-41df-8f2b-1e910fa18b2d","Type":"ContainerStarted","Data":"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.123097 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9578d667-6257p" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon-log" containerID="cri-o://d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e" gracePeriod=30 Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.123173 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9578d667-6257p" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon" containerID="cri-o://33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b" gracePeriod=30 Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.123150 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9578d667-6257p" event={"ID":"f135f08a-3bde-41df-8f2b-1e910fa18b2d","Type":"ContainerStarted","Data":"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.128417 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f744fb785-k6zt9" event={"ID":"5554f4c9-6d19-4486-a97d-c41f400aedd6","Type":"ContainerStarted","Data":"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.128449 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f744fb785-k6zt9" 
event={"ID":"5554f4c9-6d19-4486-a97d-c41f400aedd6","Type":"ContainerStarted","Data":"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.128565 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f744fb785-k6zt9" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon-log" containerID="cri-o://1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b" gracePeriod=30 Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.128665 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f744fb785-k6zt9" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon" containerID="cri-o://ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643" gracePeriod=30 Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.135973 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-754897654-c5z9l" event={"ID":"742566d1-3d02-42ea-8db1-e482ff699ada","Type":"ContainerStarted","Data":"3fe65384cffad17632f1dbf1519d03d33adf9991370ec8fbe2b865d47bffb749"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.136017 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-754897654-c5z9l" event={"ID":"742566d1-3d02-42ea-8db1-e482ff699ada","Type":"ContainerStarted","Data":"ced9e6d86d11fbb66ac7b202f36600005c25a96173633ecdc1220968957d3547"} Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.155623 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.15560015 podStartE2EDuration="8.15560015s" podCreationTimestamp="2025-12-03 21:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:27:47.137171311 +0000 UTC m=+2965.067716462" watchObservedRunningTime="2025-12-03 
21:27:47.15560015 +0000 UTC m=+2965.086145321" Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.166048 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-754897654-c5z9l" podStartSLOduration=2.69301759 podStartE2EDuration="11.166027292s" podCreationTimestamp="2025-12-03 21:27:36 +0000 UTC" firstStartedPulling="2025-12-03 21:27:37.855067983 +0000 UTC m=+2955.785613134" lastFinishedPulling="2025-12-03 21:27:46.328077685 +0000 UTC m=+2964.258622836" observedRunningTime="2025-12-03 21:27:47.156139694 +0000 UTC m=+2965.086684845" watchObservedRunningTime="2025-12-03 21:27:47.166027292 +0000 UTC m=+2965.096572443" Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.197022 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b9578d667-6257p" podStartSLOduration=2.812591012 podStartE2EDuration="14.197002789s" podCreationTimestamp="2025-12-03 21:27:33 +0000 UTC" firstStartedPulling="2025-12-03 21:27:34.943600257 +0000 UTC m=+2952.874145408" lastFinishedPulling="2025-12-03 21:27:46.328012034 +0000 UTC m=+2964.258557185" observedRunningTime="2025-12-03 21:27:47.182323372 +0000 UTC m=+2965.112868543" watchObservedRunningTime="2025-12-03 21:27:47.197002789 +0000 UTC m=+2965.127547940" Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.215505 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f744fb785-k6zt9" podStartSLOduration=2.765602703 podStartE2EDuration="14.215488628s" podCreationTimestamp="2025-12-03 21:27:33 +0000 UTC" firstStartedPulling="2025-12-03 21:27:34.943574677 +0000 UTC m=+2952.874119828" lastFinishedPulling="2025-12-03 21:27:46.393460602 +0000 UTC m=+2964.324005753" observedRunningTime="2025-12-03 21:27:47.201151621 +0000 UTC m=+2965.131696772" watchObservedRunningTime="2025-12-03 21:27:47.215488628 +0000 UTC m=+2965.146033769" Dec 03 21:27:47 crc kubenswrapper[4765]: I1203 21:27:47.229404 4765 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c468b5ffd-8p2bd" podStartSLOduration=2.478906861 podStartE2EDuration="11.229382513s" podCreationTimestamp="2025-12-03 21:27:36 +0000 UTC" firstStartedPulling="2025-12-03 21:27:37.623176445 +0000 UTC m=+2955.553721596" lastFinishedPulling="2025-12-03 21:27:46.373652097 +0000 UTC m=+2964.304197248" observedRunningTime="2025-12-03 21:27:47.226554767 +0000 UTC m=+2965.157099938" watchObservedRunningTime="2025-12-03 21:27:47.229382513 +0000 UTC m=+2965.159927664" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.487990 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.488604 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.533109 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.533804 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.600929 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.600979 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.634031 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 21:27:49 crc kubenswrapper[4765]: I1203 21:27:49.649107 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Dec 03 21:27:50 crc kubenswrapper[4765]: I1203 21:27:50.164763 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:50 crc kubenswrapper[4765]: I1203 21:27:50.164809 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 21:27:50 crc kubenswrapper[4765]: I1203 21:27:50.164822 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 21:27:50 crc kubenswrapper[4765]: I1203 21:27:50.164832 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:53 crc kubenswrapper[4765]: I1203 21:27:53.916942 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.086170 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.142064 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.142173 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.191943 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.330460 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.330605 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 21:27:54 crc kubenswrapper[4765]: I1203 21:27:54.335656 
4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 21:27:55 crc kubenswrapper[4765]: I1203 21:27:55.269201 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wvfd8" event={"ID":"53a10b13-ab07-4448-bbaa-f2077c07c07d","Type":"ContainerStarted","Data":"fdc2a765c2fc31debde1ed8555074faabb99f41382ad8db03ddcf0e0774e2614"} Dec 03 21:27:55 crc kubenswrapper[4765]: I1203 21:27:55.290162 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-wvfd8" podStartSLOduration=4.66240036 podStartE2EDuration="12.290147478s" podCreationTimestamp="2025-12-03 21:27:43 +0000 UTC" firstStartedPulling="2025-12-03 21:27:46.822414545 +0000 UTC m=+2964.752959696" lastFinishedPulling="2025-12-03 21:27:54.450161663 +0000 UTC m=+2972.380706814" observedRunningTime="2025-12-03 21:27:55.288502685 +0000 UTC m=+2973.219047856" watchObservedRunningTime="2025-12-03 21:27:55.290147478 +0000 UTC m=+2973.220692619" Dec 03 21:27:56 crc kubenswrapper[4765]: I1203 21:27:56.809986 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:56 crc kubenswrapper[4765]: I1203 21:27:56.810341 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:27:56 crc kubenswrapper[4765]: I1203 21:27:56.812256 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.238:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.238:8443: connect: connection refused" Dec 03 21:27:56 crc kubenswrapper[4765]: I1203 21:27:56.912832 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:56 crc 
kubenswrapper[4765]: I1203 21:27:56.912897 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-754897654-c5z9l" Dec 03 21:27:56 crc kubenswrapper[4765]: I1203 21:27:56.914471 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-754897654-c5z9l" podUID="742566d1-3d02-42ea-8db1-e482ff699ada" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.239:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.239:8443: connect: connection refused" Dec 03 21:28:04 crc kubenswrapper[4765]: I1203 21:28:04.375016 4765 generic.go:334] "Generic (PLEG): container finished" podID="53a10b13-ab07-4448-bbaa-f2077c07c07d" containerID="fdc2a765c2fc31debde1ed8555074faabb99f41382ad8db03ddcf0e0774e2614" exitCode=0 Dec 03 21:28:04 crc kubenswrapper[4765]: I1203 21:28:04.379349 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wvfd8" event={"ID":"53a10b13-ab07-4448-bbaa-f2077c07c07d","Type":"ContainerDied","Data":"fdc2a765c2fc31debde1ed8555074faabb99f41382ad8db03ddcf0e0774e2614"} Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.808772 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-wvfd8" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.846226 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-config-data\") pod \"53a10b13-ab07-4448-bbaa-f2077c07c07d\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.846331 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-combined-ca-bundle\") pod \"53a10b13-ab07-4448-bbaa-f2077c07c07d\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.846398 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzs6\" (UniqueName: \"kubernetes.io/projected/53a10b13-ab07-4448-bbaa-f2077c07c07d-kube-api-access-kmzs6\") pod \"53a10b13-ab07-4448-bbaa-f2077c07c07d\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.846428 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-job-config-data\") pod \"53a10b13-ab07-4448-bbaa-f2077c07c07d\" (UID: \"53a10b13-ab07-4448-bbaa-f2077c07c07d\") " Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.851825 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "53a10b13-ab07-4448-bbaa-f2077c07c07d" (UID: "53a10b13-ab07-4448-bbaa-f2077c07c07d"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.855313 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a10b13-ab07-4448-bbaa-f2077c07c07d-kube-api-access-kmzs6" (OuterVolumeSpecName: "kube-api-access-kmzs6") pod "53a10b13-ab07-4448-bbaa-f2077c07c07d" (UID: "53a10b13-ab07-4448-bbaa-f2077c07c07d"). InnerVolumeSpecName "kube-api-access-kmzs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.855824 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-config-data" (OuterVolumeSpecName: "config-data") pod "53a10b13-ab07-4448-bbaa-f2077c07c07d" (UID: "53a10b13-ab07-4448-bbaa-f2077c07c07d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.880681 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a10b13-ab07-4448-bbaa-f2077c07c07d" (UID: "53a10b13-ab07-4448-bbaa-f2077c07c07d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.949226 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.949265 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.949325 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzs6\" (UniqueName: \"kubernetes.io/projected/53a10b13-ab07-4448-bbaa-f2077c07c07d-kube-api-access-kmzs6\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:05 crc kubenswrapper[4765]: I1203 21:28:05.949339 4765 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/53a10b13-ab07-4448-bbaa-f2077c07c07d-job-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.404535 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wvfd8" event={"ID":"53a10b13-ab07-4448-bbaa-f2077c07c07d","Type":"ContainerDied","Data":"93692eddb51b37b9caf24251e5b34d5cf5de06dbe0d572c635253eedb95b5b68"} Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.404584 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93692eddb51b37b9caf24251e5b34d5cf5de06dbe0d572c635253eedb95b5b68" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.404595 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-wvfd8" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.798873 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:06 crc kubenswrapper[4765]: E1203 21:28:06.799502 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a10b13-ab07-4448-bbaa-f2077c07c07d" containerName="manila-db-sync" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.799539 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a10b13-ab07-4448-bbaa-f2077c07c07d" containerName="manila-db-sync" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.799885 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a10b13-ab07-4448-bbaa-f2077c07c07d" containerName="manila-db-sync" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.801591 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.803832 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-hc4kg" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.804068 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.805090 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.809060 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.810919 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.811000 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.814468 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.826508 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.868673 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.869569 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.869639 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.869796 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dhfq\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-kube-api-access-9dhfq\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.869845 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.869911 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870520 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870611 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-ceph\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870641 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870722 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrcv\" (UniqueName: \"kubernetes.io/projected/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-kube-api-access-khrcv\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870808 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870859 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870893 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-scripts\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.870939 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.871018 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-scripts\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.961215 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hv49c"] Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.963099 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972590 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dhfq\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-kube-api-access-9dhfq\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972725 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972751 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972768 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data-custom\") pod \"manila-scheduler-0\" (UID: 
\"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972801 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-ceph\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972833 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972862 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrcv\" (UniqueName: \"kubernetes.io/projected/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-kube-api-access-khrcv\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972915 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972956 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972977 
4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-scripts\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.972998 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.973033 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-scripts\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.973067 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.973086 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.973330 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.975561 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.975649 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.981371 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.982256 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.986846 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-scripts\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " 
pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.987892 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.988651 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-ceph\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.988698 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.991195 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:06 crc kubenswrapper[4765]: I1203 21:28:06.999503 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.000571 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-scripts\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.000612 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hv49c"] Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.004979 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dhfq\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-kube-api-access-9dhfq\") pod \"manila-share-share1-0\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.019827 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrcv\" (UniqueName: \"kubernetes.io/projected/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-kube-api-access-khrcv\") pod \"manila-scheduler-0\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.079111 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.079192 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-config\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.095681 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6l2l\" (UniqueName: \"kubernetes.io/projected/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-kube-api-access-v6l2l\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.103884 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.103987 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.104087 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.125001 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.126755 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.135957 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.143181 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.143508 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.167916 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205423 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205471 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00c2ec-4011-4de9-a016-3eaeff33a391-etc-machine-id\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205509 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205527 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data-custom\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205573 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-scripts\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205609 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205631 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-config\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205680 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6l2l\" (UniqueName: \"kubernetes.io/projected/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-kube-api-access-v6l2l\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205704 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gqs86\" (UniqueName: \"kubernetes.io/projected/5f00c2ec-4011-4de9-a016-3eaeff33a391-kube-api-access-gqs86\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205763 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205779 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00c2ec-4011-4de9-a016-3eaeff33a391-logs\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205796 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.205815 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.206794 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.206824 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-config\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.207627 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.207745 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.208187 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.224786 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6l2l\" (UniqueName: \"kubernetes.io/projected/bcfd57de-3b61-4a34-a4a3-c7808baedc2d-kube-api-access-v6l2l\") pod \"dnsmasq-dns-69655fd4bf-hv49c\" (UID: \"bcfd57de-3b61-4a34-a4a3-c7808baedc2d\") " 
pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307148 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00c2ec-4011-4de9-a016-3eaeff33a391-etc-machine-id\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307405 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data-custom\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307447 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-scripts\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307511 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqs86\" (UniqueName: \"kubernetes.io/projected/5f00c2ec-4011-4de9-a016-3eaeff33a391-kube-api-access-gqs86\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307569 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00c2ec-4011-4de9-a016-3eaeff33a391-logs\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307585 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307604 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.307916 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00c2ec-4011-4de9-a016-3eaeff33a391-etc-machine-id\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.308291 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00c2ec-4011-4de9-a016-3eaeff33a391-logs\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.313851 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data-custom\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.316966 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.317109 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.329194 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqs86\" (UniqueName: \"kubernetes.io/projected/5f00c2ec-4011-4de9-a016-3eaeff33a391-kube-api-access-gqs86\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.329888 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-scripts\") pod \"manila-api-0\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") " pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.386260 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.607027 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.894817 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.908553 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:07 crc kubenswrapper[4765]: I1203 21:28:07.984694 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hv49c"] Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.210040 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 03 21:28:08 crc kubenswrapper[4765]: W1203 21:28:08.218690 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f00c2ec_4011_4de9_a016_3eaeff33a391.slice/crio-750d2839db1f78f146c3f9da1c9a3e3219d81f00e655e730315e73630a7026b6 WatchSource:0}: Error finding container 750d2839db1f78f146c3f9da1c9a3e3219d81f00e655e730315e73630a7026b6: Status 404 returned error can't find the container with id 750d2839db1f78f146c3f9da1c9a3e3219d81f00e655e730315e73630a7026b6 Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.451797 4765 generic.go:334] "Generic (PLEG): container finished" podID="bcfd57de-3b61-4a34-a4a3-c7808baedc2d" containerID="71868f0fc5129a4db22b556d97a55353b664fa1690c41d7198f2e600064af3d2" exitCode=0 Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.451869 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" event={"ID":"bcfd57de-3b61-4a34-a4a3-c7808baedc2d","Type":"ContainerDied","Data":"71868f0fc5129a4db22b556d97a55353b664fa1690c41d7198f2e600064af3d2"} Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.451895 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" 
event={"ID":"bcfd57de-3b61-4a34-a4a3-c7808baedc2d","Type":"ContainerStarted","Data":"30a35263257c09be79d1294b27ab8b7fd986a456f0611972d818095fb6a36389"} Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.454451 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2fbe1782-d690-4d54-92a9-94309a01ae5d","Type":"ContainerStarted","Data":"bce983a18521d025ae71e3d5d196fe0477cc86d17c779d50ccbcb22f55d6e5c7"} Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.465903 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"67e8a1e2-931d-43b6-98dc-eab619aa3dfc","Type":"ContainerStarted","Data":"bbd126d8ac08ff7bde047e07283e73d71c2bc6611f44663befb8486e18bf7812"} Dec 03 21:28:08 crc kubenswrapper[4765]: I1203 21:28:08.499858 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5f00c2ec-4011-4de9-a016-3eaeff33a391","Type":"ContainerStarted","Data":"750d2839db1f78f146c3f9da1c9a3e3219d81f00e655e730315e73630a7026b6"} Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.145173 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.178576 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-754897654-c5z9l" Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.542932 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"67e8a1e2-931d-43b6-98dc-eab619aa3dfc","Type":"ContainerStarted","Data":"c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e"} Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.547760 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"5f00c2ec-4011-4de9-a016-3eaeff33a391","Type":"ContainerStarted","Data":"db28728cd0cc07478a7c8ce2758f775785bc0fc23adc6d13809039517355127f"} Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.547789 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5f00c2ec-4011-4de9-a016-3eaeff33a391","Type":"ContainerStarted","Data":"2d7f8ae8cabdbfc2775ae8b50712ab4e471856e17cba63ec56dc52912aa4a832"} Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.549244 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.553043 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" event={"ID":"bcfd57de-3b61-4a34-a4a3-c7808baedc2d","Type":"ContainerStarted","Data":"9228f56af98a3386b70062d70258008b39cff0f9fc5c0d1c87fdda1279eb84d3"} Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.553581 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.604575 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.604556568 podStartE2EDuration="2.604556568s" podCreationTimestamp="2025-12-03 21:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:28:09.587381536 +0000 UTC m=+2987.517926687" watchObservedRunningTime="2025-12-03 21:28:09.604556568 +0000 UTC m=+2987.535101719" Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.678734 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" podStartSLOduration=3.678713798 podStartE2EDuration="3.678713798s" podCreationTimestamp="2025-12-03 21:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:28:09.652691431 +0000 UTC m=+2987.583236602" watchObservedRunningTime="2025-12-03 21:28:09.678713798 +0000 UTC m=+2987.609258949" Dec 03 21:28:09 crc kubenswrapper[4765]: I1203 21:28:09.932651 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Dec 03 21:28:10 crc kubenswrapper[4765]: I1203 21:28:10.569505 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"67e8a1e2-931d-43b6-98dc-eab619aa3dfc","Type":"ContainerStarted","Data":"111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738"} Dec 03 21:28:10 crc kubenswrapper[4765]: I1203 21:28:10.592524 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.904316271 podStartE2EDuration="4.59250798s" podCreationTimestamp="2025-12-03 21:28:06 +0000 UTC" firstStartedPulling="2025-12-03 21:28:07.899263797 +0000 UTC m=+2985.829808948" lastFinishedPulling="2025-12-03 21:28:08.587455506 +0000 UTC m=+2986.518000657" observedRunningTime="2025-12-03 21:28:10.584672929 +0000 UTC m=+2988.515218080" watchObservedRunningTime="2025-12-03 21:28:10.59250798 +0000 UTC m=+2988.523053131" Dec 03 21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.215617 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-754897654-c5z9l" Dec 03 21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.266140 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c468b5ffd-8p2bd"] Dec 03 21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.266370 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon-log" containerID="cri-o://bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26" gracePeriod=30 Dec 03 
21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.266717 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" containerID="cri-o://2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635" gracePeriod=30
Dec 03 21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.271701 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.238:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Dec 03 21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.578608 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api-log" containerID="cri-o://2d7f8ae8cabdbfc2775ae8b50712ab4e471856e17cba63ec56dc52912aa4a832" gracePeriod=30
Dec 03 21:28:11 crc kubenswrapper[4765]: I1203 21:28:11.579005 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api" containerID="cri-o://db28728cd0cc07478a7c8ce2758f775785bc0fc23adc6d13809039517355127f" gracePeriod=30
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.425471 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.426488 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-central-agent" containerID="cri-o://65946a80c04385d40c2df28026342005506998cf88de180927e9cca2b0c29b45" gracePeriod=30
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.426493 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="proxy-httpd" containerID="cri-o://d365a1fc3f649a27cbb47310882ccb510942f2e48e6df5fa64da72415c895cf7" gracePeriod=30
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.426539 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="sg-core" containerID="cri-o://36408eb3cf5a2a3300b72d4e20e962779d4da944dfc3c0a8a6e2a98a8b4261ef" gracePeriod=30
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.426547 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-notification-agent" containerID="cri-o://9608b9411b03ef15d0cc3502e92caeec8958997628bf51461ffde67a508dcfe3" gracePeriod=30
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.586490 4765 generic.go:334] "Generic (PLEG): container finished" podID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerID="db28728cd0cc07478a7c8ce2758f775785bc0fc23adc6d13809039517355127f" exitCode=0
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.586520 4765 generic.go:334] "Generic (PLEG): container finished" podID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerID="2d7f8ae8cabdbfc2775ae8b50712ab4e471856e17cba63ec56dc52912aa4a832" exitCode=143
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.586558 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5f00c2ec-4011-4de9-a016-3eaeff33a391","Type":"ContainerDied","Data":"db28728cd0cc07478a7c8ce2758f775785bc0fc23adc6d13809039517355127f"}
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.586583 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5f00c2ec-4011-4de9-a016-3eaeff33a391","Type":"ContainerDied","Data":"2d7f8ae8cabdbfc2775ae8b50712ab4e471856e17cba63ec56dc52912aa4a832"}
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.589113 4765 generic.go:334] "Generic (PLEG): container finished" podID="9375a48d-3efa-435a-bc05-c344e97943ff" containerID="d365a1fc3f649a27cbb47310882ccb510942f2e48e6df5fa64da72415c895cf7" exitCode=0
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.589138 4765 generic.go:334] "Generic (PLEG): container finished" podID="9375a48d-3efa-435a-bc05-c344e97943ff" containerID="36408eb3cf5a2a3300b72d4e20e962779d4da944dfc3c0a8a6e2a98a8b4261ef" exitCode=2
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.589157 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerDied","Data":"d365a1fc3f649a27cbb47310882ccb510942f2e48e6df5fa64da72415c895cf7"}
Dec 03 21:28:12 crc kubenswrapper[4765]: I1203 21:28:12.589177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerDied","Data":"36408eb3cf5a2a3300b72d4e20e962779d4da944dfc3c0a8a6e2a98a8b4261ef"}
Dec 03 21:28:13 crc kubenswrapper[4765]: I1203 21:28:13.609647 4765 generic.go:334] "Generic (PLEG): container finished" podID="9375a48d-3efa-435a-bc05-c344e97943ff" containerID="65946a80c04385d40c2df28026342005506998cf88de180927e9cca2b0c29b45" exitCode=0
Dec 03 21:28:13 crc kubenswrapper[4765]: I1203 21:28:13.609732 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerDied","Data":"65946a80c04385d40c2df28026342005506998cf88de180927e9cca2b0c29b45"}
Dec 03 21:28:14 crc kubenswrapper[4765]: I1203 21:28:14.619742 4765 generic.go:334] "Generic (PLEG): container finished" podID="9375a48d-3efa-435a-bc05-c344e97943ff" containerID="9608b9411b03ef15d0cc3502e92caeec8958997628bf51461ffde67a508dcfe3" exitCode=0
Dec 03 21:28:14 crc kubenswrapper[4765]: I1203 21:28:14.619984 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerDied","Data":"9608b9411b03ef15d0cc3502e92caeec8958997628bf51461ffde67a508dcfe3"}
Dec 03 21:28:14 crc kubenswrapper[4765]: I1203 21:28:14.670526 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.238:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:36516->10.217.0.238:8443: read: connection reset by peer"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.342477 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410163 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00c2ec-4011-4de9-a016-3eaeff33a391-logs\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410275 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqs86\" (UniqueName: \"kubernetes.io/projected/5f00c2ec-4011-4de9-a016-3eaeff33a391-kube-api-access-gqs86\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410398 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data-custom\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410456 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00c2ec-4011-4de9-a016-3eaeff33a391-etc-machine-id\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410478 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-combined-ca-bundle\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410500 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-scripts\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410526 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data\") pod \"5f00c2ec-4011-4de9-a016-3eaeff33a391\" (UID: \"5f00c2ec-4011-4de9-a016-3eaeff33a391\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.410748 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f00c2ec-4011-4de9-a016-3eaeff33a391-logs" (OuterVolumeSpecName: "logs") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.411441 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00c2ec-4011-4de9-a016-3eaeff33a391-logs\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.412600 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f00c2ec-4011-4de9-a016-3eaeff33a391-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.427539 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.427602 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-scripts" (OuterVolumeSpecName: "scripts") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.428361 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f00c2ec-4011-4de9-a016-3eaeff33a391-kube-api-access-gqs86" (OuterVolumeSpecName: "kube-api-access-gqs86") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "kube-api-access-gqs86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.492092 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.512469 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqs86\" (UniqueName: \"kubernetes.io/projected/5f00c2ec-4011-4de9-a016-3eaeff33a391-kube-api-access-gqs86\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.512504 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.512513 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00c2ec-4011-4de9-a016-3eaeff33a391-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.512525 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.512534 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.535406 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data" (OuterVolumeSpecName: "config-data") pod "5f00c2ec-4011-4de9-a016-3eaeff33a391" (UID: "5f00c2ec-4011-4de9-a016-3eaeff33a391"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.578025 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616140 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-scripts\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616205 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-log-httpd\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616281 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9qrn\" (UniqueName: \"kubernetes.io/projected/9375a48d-3efa-435a-bc05-c344e97943ff-kube-api-access-q9qrn\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616338 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-config-data\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616405 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-run-httpd\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616480 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-combined-ca-bundle\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616518 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-ceilometer-tls-certs\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.616564 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-sg-core-conf-yaml\") pod \"9375a48d-3efa-435a-bc05-c344e97943ff\" (UID: \"9375a48d-3efa-435a-bc05-c344e97943ff\") "
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.617033 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00c2ec-4011-4de9-a016-3eaeff33a391-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.619009 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.620896 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.623735 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9375a48d-3efa-435a-bc05-c344e97943ff-kube-api-access-q9qrn" (OuterVolumeSpecName: "kube-api-access-q9qrn") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "kube-api-access-q9qrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.635121 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-scripts" (OuterVolumeSpecName: "scripts") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.663102 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9375a48d-3efa-435a-bc05-c344e97943ff","Type":"ContainerDied","Data":"80fccf6f50f6c91f3805c4a26b11c0ddd4655918f330f38cc462af16df3fd377"}
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.663150 4765 scope.go:117] "RemoveContainer" containerID="d365a1fc3f649a27cbb47310882ccb510942f2e48e6df5fa64da72415c895cf7"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.663274 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.667763 4765 generic.go:334] "Generic (PLEG): container finished" podID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerID="2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635" exitCode=0
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.667854 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c468b5ffd-8p2bd" event={"ID":"6a49be96-f6b0-4694-b6d1-24dbaf704602","Type":"ContainerDied","Data":"2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635"}
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.670351 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5f00c2ec-4011-4de9-a016-3eaeff33a391","Type":"ContainerDied","Data":"750d2839db1f78f146c3f9da1c9a3e3219d81f00e655e730315e73630a7026b6"}
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.670424 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.698585 4765 scope.go:117] "RemoveContainer" containerID="36408eb3cf5a2a3300b72d4e20e962779d4da944dfc3c0a8a6e2a98a8b4261ef"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.719429 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.719741 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.720065 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9375a48d-3efa-435a-bc05-c344e97943ff-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.725577 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9qrn\" (UniqueName: \"kubernetes.io/projected/9375a48d-3efa-435a-bc05-c344e97943ff-kube-api-access-q9qrn\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.726964 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.732611 4765 scope.go:117] "RemoveContainer" containerID="9608b9411b03ef15d0cc3502e92caeec8958997628bf51461ffde67a508dcfe3"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.740110 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.750605 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.757852 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Dec 03 21:28:15 crc kubenswrapper[4765]: E1203 21:28:15.758181 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-notification-agent"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758198 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-notification-agent"
Dec 03 21:28:15 crc kubenswrapper[4765]: E1203 21:28:15.758222 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="sg-core"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758229 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="sg-core"
Dec 03 21:28:15 crc kubenswrapper[4765]: E1203 21:28:15.758235 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="proxy-httpd"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758241 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="proxy-httpd"
Dec 03 21:28:15 crc kubenswrapper[4765]: E1203 21:28:15.758258 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758265 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api"
Dec 03 21:28:15 crc kubenswrapper[4765]: E1203 21:28:15.758276 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-central-agent"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758282 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-central-agent"
Dec 03 21:28:15 crc kubenswrapper[4765]: E1203 21:28:15.758309 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api-log"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758314 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api-log"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758495 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758506 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" containerName="manila-api-log"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758519 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="proxy-httpd"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758533 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-central-agent"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758545 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="ceilometer-notification-agent"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.758555 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" containerName="sg-core"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.761452 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.764456 4765 scope.go:117] "RemoveContainer" containerID="65946a80c04385d40c2df28026342005506998cf88de180927e9cca2b0c29b45"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.768907 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.769264 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.769478 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.781119 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.794935 4765 scope.go:117] "RemoveContainer" containerID="db28728cd0cc07478a7c8ce2758f775785bc0fc23adc6d13809039517355127f"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.821447 4765 scope.go:117] "RemoveContainer" containerID="2d7f8ae8cabdbfc2775ae8b50712ab4e471856e17cba63ec56dc52912aa4a832"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.824506 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.826999 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-config-data-custom\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827078 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd97dcb-bc57-4867-a85d-be547f7b716f-etc-machine-id\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827161 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njsfl\" (UniqueName: \"kubernetes.io/projected/fdd97dcb-bc57-4867-a85d-be547f7b716f-kube-api-access-njsfl\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827338 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-config-data\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827369 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-public-tls-certs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827453 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-scripts\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827468 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd97dcb-bc57-4867-a85d-be547f7b716f-logs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827555 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827592 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827882 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.827915 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.843262 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.861134 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-config-data" (OuterVolumeSpecName: "config-data") pod "9375a48d-3efa-435a-bc05-c344e97943ff" (UID: "9375a48d-3efa-435a-bc05-c344e97943ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929525 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-scripts\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929580 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd97dcb-bc57-4867-a85d-be547f7b716f-logs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929623 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929647 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929716 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-config-data-custom\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929779 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd97dcb-bc57-4867-a85d-be547f7b716f-etc-machine-id\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njsfl\" (UniqueName: \"kubernetes.io/projected/fdd97dcb-bc57-4867-a85d-be547f7b716f-kube-api-access-njsfl\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929908 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-public-tls-certs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929927 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-config-data\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.929984 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.930002 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9375a48d-3efa-435a-bc05-c344e97943ff-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.930455 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd97dcb-bc57-4867-a85d-be547f7b716f-etc-machine-id\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.930758 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd97dcb-bc57-4867-a85d-be547f7b716f-logs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.933983 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-config-data\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0"
Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.935216 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName:
\"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0" Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.935573 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-scripts\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0" Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.935865 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-public-tls-certs\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0" Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.935931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-config-data-custom\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0" Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.936205 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd97dcb-bc57-4867-a85d-be547f7b716f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0" Dec 03 21:28:15 crc kubenswrapper[4765]: I1203 21:28:15.946003 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njsfl\" (UniqueName: \"kubernetes.io/projected/fdd97dcb-bc57-4867-a85d-be547f7b716f-kube-api-access-njsfl\") pod \"manila-api-0\" (UID: \"fdd97dcb-bc57-4867-a85d-be547f7b716f\") " pod="openstack/manila-api-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.001420 
4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.011507 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.025735 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.028744 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.033632 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.033865 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.034060 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.062810 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.095258 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137165 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137232 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137284 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-run-httpd\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137368 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-scripts\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137671 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-config-data\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137728 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-log-httpd\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.137775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfjf\" (UniqueName: \"kubernetes.io/projected/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-kube-api-access-vsfjf\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.241238 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.241482 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-scripts\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.241558 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-config-data\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.241581 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfjf\" (UniqueName: \"kubernetes.io/projected/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-kube-api-access-vsfjf\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.242132 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-log-httpd\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.242179 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.242224 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.242333 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-run-httpd\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc 
kubenswrapper[4765]: I1203 21:28:16.242753 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-run-httpd\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.243490 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-log-httpd\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.246332 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.246920 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-scripts\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.249056 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-config-data\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.249876 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.258560 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.272475 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfjf\" (UniqueName: \"kubernetes.io/projected/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-kube-api-access-vsfjf\") pod \"ceilometer-0\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.351392 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.373237 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f00c2ec-4011-4de9-a016-3eaeff33a391" path="/var/lib/kubelet/pods/5f00c2ec-4011-4de9-a016-3eaeff33a391/volumes" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.374102 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9375a48d-3efa-435a-bc05-c344e97943ff" path="/var/lib/kubelet/pods/9375a48d-3efa-435a-bc05-c344e97943ff/volumes" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.734758 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2fbe1782-d690-4d54-92a9-94309a01ae5d","Type":"ContainerStarted","Data":"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943"} Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.735065 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"2fbe1782-d690-4d54-92a9-94309a01ae5d","Type":"ContainerStarted","Data":"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c"} Dec 03 21:28:16 crc kubenswrapper[4765]: W1203 21:28:16.738631 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd97dcb_bc57_4867_a85d_be547f7b716f.slice/crio-c828577378d705353051a71685fc66e050e97ab8fd19572789fac946d7a41953 WatchSource:0}: Error finding container c828577378d705353051a71685fc66e050e97ab8fd19572789fac946d7a41953: Status 404 returned error can't find the container with id c828577378d705353051a71685fc66e050e97ab8fd19572789fac946d7a41953 Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.742595 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.768559 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.422737607 podStartE2EDuration="10.768540216s" podCreationTimestamp="2025-12-03 21:28:06 +0000 UTC" firstStartedPulling="2025-12-03 21:28:07.908841552 +0000 UTC m=+2985.839386703" lastFinishedPulling="2025-12-03 21:28:15.254644161 +0000 UTC m=+2993.185189312" observedRunningTime="2025-12-03 21:28:16.760240703 +0000 UTC m=+2994.690785864" watchObservedRunningTime="2025-12-03 21:28:16.768540216 +0000 UTC m=+2994.699085377" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.812202 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.238:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.238:8443: connect: connection refused" Dec 03 21:28:16 crc kubenswrapper[4765]: I1203 21:28:16.820425 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.144398 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.164164 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.399940 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-hv49c" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.477097 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-hrfl2"] Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.477282 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerName="dnsmasq-dns" containerID="cri-o://875526fb4f8c4844a81a92753da14b8ccc0ade9aa3fb20bf4c20d61a248e92ab" gracePeriod=10 Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.673122 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.690922 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:28:17 crc kubenswrapper[4765]: E1203 21:28:17.716614 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf135f08a_3bde_41df_8f2b_1e910fa18b2d.slice/crio-conmon-33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554f4c9_6d19_4486_a97d_c41f400aedd6.slice/crio-1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554f4c9_6d19_4486_a97d_c41f400aedd6.slice/crio-ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf135f08a_3bde_41df_8f2b_1e910fa18b2d.slice/crio-conmon-d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554f4c9_6d19_4486_a97d_c41f400aedd6.slice/crio-conmon-1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554f4c9_6d19_4486_a97d_c41f400aedd6.slice/crio-conmon-ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfd9c4f_37d5_4f3e_a18b_8472892c49e3.slice/crio-conmon-875526fb4f8c4844a81a92753da14b8ccc0ade9aa3fb20bf4c20d61a248e92ab.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf135f08a_3bde_41df_8f2b_1e910fa18b2d.slice/crio-33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf135f08a_3bde_41df_8f2b_1e910fa18b2d.slice/crio-d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e.scope\": RecentStats: unable to find data in memory cache]" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.769848 4765 generic.go:334] "Generic (PLEG): container finished" podID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerID="875526fb4f8c4844a81a92753da14b8ccc0ade9aa3fb20bf4c20d61a248e92ab" exitCode=0 Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.770159 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" event={"ID":"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3","Type":"ContainerDied","Data":"875526fb4f8c4844a81a92753da14b8ccc0ade9aa3fb20bf4c20d61a248e92ab"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.777349 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"fdd97dcb-bc57-4867-a85d-be547f7b716f","Type":"ContainerStarted","Data":"8b7366869eb01138771c11ca443bd011bdb27cb1d9a250afb39cf6f92ddd8b5d"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.777431 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"fdd97dcb-bc57-4867-a85d-be547f7b716f","Type":"ContainerStarted","Data":"c828577378d705353051a71685fc66e050e97ab8fd19572789fac946d7a41953"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787679 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-config-data\") pod \"5554f4c9-6d19-4486-a97d-c41f400aedd6\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " Dec 
03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787762 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5554f4c9-6d19-4486-a97d-c41f400aedd6-horizon-secret-key\") pod \"5554f4c9-6d19-4486-a97d-c41f400aedd6\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787799 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-scripts\") pod \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787817 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f135f08a-3bde-41df-8f2b-1e910fa18b2d-horizon-secret-key\") pod \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787859 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-scripts\") pod \"5554f4c9-6d19-4486-a97d-c41f400aedd6\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787882 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f135f08a-3bde-41df-8f2b-1e910fa18b2d-logs\") pod \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.787950 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lvj\" (UniqueName: 
\"kubernetes.io/projected/f135f08a-3bde-41df-8f2b-1e910fa18b2d-kube-api-access-h6lvj\") pod \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.788001 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554f4c9-6d19-4486-a97d-c41f400aedd6-logs\") pod \"5554f4c9-6d19-4486-a97d-c41f400aedd6\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.788067 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-config-data\") pod \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\" (UID: \"f135f08a-3bde-41df-8f2b-1e910fa18b2d\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.788123 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9b8v\" (UniqueName: \"kubernetes.io/projected/5554f4c9-6d19-4486-a97d-c41f400aedd6-kube-api-access-q9b8v\") pod \"5554f4c9-6d19-4486-a97d-c41f400aedd6\" (UID: \"5554f4c9-6d19-4486-a97d-c41f400aedd6\") " Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.794180 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5554f4c9-6d19-4486-a97d-c41f400aedd6-logs" (OuterVolumeSpecName: "logs") pod "5554f4c9-6d19-4486-a97d-c41f400aedd6" (UID: "5554f4c9-6d19-4486-a97d-c41f400aedd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.794645 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f135f08a-3bde-41df-8f2b-1e910fa18b2d-logs" (OuterVolumeSpecName: "logs") pod "f135f08a-3bde-41df-8f2b-1e910fa18b2d" (UID: "f135f08a-3bde-41df-8f2b-1e910fa18b2d"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.795015 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5554f4c9-6d19-4486-a97d-c41f400aedd6-kube-api-access-q9b8v" (OuterVolumeSpecName: "kube-api-access-q9b8v") pod "5554f4c9-6d19-4486-a97d-c41f400aedd6" (UID: "5554f4c9-6d19-4486-a97d-c41f400aedd6"). InnerVolumeSpecName "kube-api-access-q9b8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.803773 4765 generic.go:334] "Generic (PLEG): container finished" podID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerID="33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b" exitCode=137 Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.803817 4765 generic.go:334] "Generic (PLEG): container finished" podID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerID="d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e" exitCode=137 Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.803891 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9578d667-6257p" event={"ID":"f135f08a-3bde-41df-8f2b-1e910fa18b2d","Type":"ContainerDied","Data":"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.803944 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9578d667-6257p" event={"ID":"f135f08a-3bde-41df-8f2b-1e910fa18b2d","Type":"ContainerDied","Data":"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.803960 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9578d667-6257p" event={"ID":"f135f08a-3bde-41df-8f2b-1e910fa18b2d","Type":"ContainerDied","Data":"e7f44c25f4042d19619068c087656061ecc37eea24fecc592970a69522350a41"} Dec 03 
21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.803981 4765 scope.go:117] "RemoveContainer" containerID="33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.804152 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9578d667-6257p" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.806711 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f135f08a-3bde-41df-8f2b-1e910fa18b2d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f135f08a-3bde-41df-8f2b-1e910fa18b2d" (UID: "f135f08a-3bde-41df-8f2b-1e910fa18b2d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.807526 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f135f08a-3bde-41df-8f2b-1e910fa18b2d-kube-api-access-h6lvj" (OuterVolumeSpecName: "kube-api-access-h6lvj") pod "f135f08a-3bde-41df-8f2b-1e910fa18b2d" (UID: "f135f08a-3bde-41df-8f2b-1e910fa18b2d"). InnerVolumeSpecName "kube-api-access-h6lvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.819662 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5554f4c9-6d19-4486-a97d-c41f400aedd6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5554f4c9-6d19-4486-a97d-c41f400aedd6" (UID: "5554f4c9-6d19-4486-a97d-c41f400aedd6"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.820167 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-scripts" (OuterVolumeSpecName: "scripts") pod "5554f4c9-6d19-4486-a97d-c41f400aedd6" (UID: "5554f4c9-6d19-4486-a97d-c41f400aedd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.841702 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerStarted","Data":"263c33db6e3c46a04c2f8890d0ffd5b80cb12c648a96b9838f4773bf467c20c7"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.841745 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerStarted","Data":"72cdca95c7f1454248effdf8dd631a7bfea1e05c9b6d8439c595f5cf1e395006"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.842012 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-config-data" (OuterVolumeSpecName: "config-data") pod "f135f08a-3bde-41df-8f2b-1e910fa18b2d" (UID: "f135f08a-3bde-41df-8f2b-1e910fa18b2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.843504 4765 generic.go:334] "Generic (PLEG): container finished" podID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerID="ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643" exitCode=137 Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.843524 4765 generic.go:334] "Generic (PLEG): container finished" podID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerID="1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b" exitCode=137 Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.844549 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f744fb785-k6zt9" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.844762 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f744fb785-k6zt9" event={"ID":"5554f4c9-6d19-4486-a97d-c41f400aedd6","Type":"ContainerDied","Data":"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.844786 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f744fb785-k6zt9" event={"ID":"5554f4c9-6d19-4486-a97d-c41f400aedd6","Type":"ContainerDied","Data":"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.844796 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f744fb785-k6zt9" event={"ID":"5554f4c9-6d19-4486-a97d-c41f400aedd6","Type":"ContainerDied","Data":"9162659a4460dc5efea9dcfafd04d8dcf5c6dd5cf7b98756ddd5ebd40aa0f2bb"} Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.860717 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-config-data" (OuterVolumeSpecName: "config-data") pod "5554f4c9-6d19-4486-a97d-c41f400aedd6" (UID: 
"5554f4c9-6d19-4486-a97d-c41f400aedd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.863576 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-scripts" (OuterVolumeSpecName: "scripts") pod "f135f08a-3bde-41df-8f2b-1e910fa18b2d" (UID: "f135f08a-3bde-41df-8f2b-1e910fa18b2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890645 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5554f4c9-6d19-4486-a97d-c41f400aedd6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890680 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890694 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f135f08a-3bde-41df-8f2b-1e910fa18b2d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890706 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890721 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f135f08a-3bde-41df-8f2b-1e910fa18b2d-logs\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890733 4765 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-h6lvj\" (UniqueName: \"kubernetes.io/projected/f135f08a-3bde-41df-8f2b-1e910fa18b2d-kube-api-access-h6lvj\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890748 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554f4c9-6d19-4486-a97d-c41f400aedd6-logs\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890760 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f135f08a-3bde-41df-8f2b-1e910fa18b2d-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890815 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9b8v\" (UniqueName: \"kubernetes.io/projected/5554f4c9-6d19-4486-a97d-c41f400aedd6-kube-api-access-q9b8v\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:17 crc kubenswrapper[4765]: I1203 21:28:17.890828 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5554f4c9-6d19-4486-a97d-c41f400aedd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.084328 4765 scope.go:117] "RemoveContainer" containerID="d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.124973 4765 scope.go:117] "RemoveContainer" containerID="33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b" Dec 03 21:28:18 crc kubenswrapper[4765]: E1203 21:28:18.127173 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b\": container with ID starting with 33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b not found: ID does not exist" 
containerID="33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.127216 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b"} err="failed to get container status \"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b\": rpc error: code = NotFound desc = could not find container \"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b\": container with ID starting with 33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.127241 4765 scope.go:117] "RemoveContainer" containerID="d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.130443 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 21:28:18 crc kubenswrapper[4765]: E1203 21:28:18.130853 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e\": container with ID starting with d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e not found: ID does not exist" containerID="d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.130885 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e"} err="failed to get container status \"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e\": rpc error: code = NotFound desc = could not find container \"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e\": container with ID starting 
with d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.130910 4765 scope.go:117] "RemoveContainer" containerID="33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.132317 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b"} err="failed to get container status \"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b\": rpc error: code = NotFound desc = could not find container \"33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b\": container with ID starting with 33fb350565bbfe13f26cd62c6427ec7a2d1bc995ad1fe59b460f57b291a6f33b not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.132351 4765 scope.go:117] "RemoveContainer" containerID="d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.133886 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e"} err="failed to get container status \"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e\": rpc error: code = NotFound desc = could not find container \"d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e\": container with ID starting with d6d00b368b289e16959d006e3c501e51bf48baa3f0655c86902dd0417718f14e not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.133924 4765 scope.go:117] "RemoveContainer" containerID="ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.198086 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-openstack-edpm-ipam\") pod \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.198233 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjxzl\" (UniqueName: \"kubernetes.io/projected/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-kube-api-access-kjxzl\") pod \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.198321 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-config\") pod \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.198375 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-nb\") pod \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.198396 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-dns-svc\") pod \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.198449 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-sb\") pod \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\" (UID: \"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3\") " Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 
21:28:18.233471 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-kube-api-access-kjxzl" (OuterVolumeSpecName: "kube-api-access-kjxzl") pod "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" (UID: "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3"). InnerVolumeSpecName "kube-api-access-kjxzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.258728 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b9578d667-6257p"] Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.296264 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b9578d667-6257p"] Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.300859 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjxzl\" (UniqueName: \"kubernetes.io/projected/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-kube-api-access-kjxzl\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.307858 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f744fb785-k6zt9"] Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.318233 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" (UID: "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.332117 4765 scope.go:117] "RemoveContainer" containerID="1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.386782 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" path="/var/lib/kubelet/pods/f135f08a-3bde-41df-8f2b-1e910fa18b2d/volumes" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.393817 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" (UID: "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.404562 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.404589 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.418811 4765 scope.go:117] "RemoveContainer" containerID="ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643" Dec 03 21:28:18 crc kubenswrapper[4765]: E1203 21:28:18.422433 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643\": container with ID starting with ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643 not found: ID does not 
exist" containerID="ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.422472 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643"} err="failed to get container status \"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643\": rpc error: code = NotFound desc = could not find container \"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643\": container with ID starting with ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643 not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.422495 4765 scope.go:117] "RemoveContainer" containerID="1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b" Dec 03 21:28:18 crc kubenswrapper[4765]: E1203 21:28:18.427441 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b\": container with ID starting with 1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b not found: ID does not exist" containerID="1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.427486 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b"} err="failed to get container status \"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b\": rpc error: code = NotFound desc = could not find container \"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b\": container with ID starting with 1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.427509 4765 scope.go:117] 
"RemoveContainer" containerID="ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.431446 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643"} err="failed to get container status \"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643\": rpc error: code = NotFound desc = could not find container \"ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643\": container with ID starting with ae27eab504111114a3eec64e214cea878157ca65b2f773a7fbf93f57840ae643 not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.431483 4765 scope.go:117] "RemoveContainer" containerID="1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.436456 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b"} err="failed to get container status \"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b\": rpc error: code = NotFound desc = could not find container \"1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b\": container with ID starting with 1f28d401f3539062e18d6fdb7246b8ca15aaba6899980856a34427ce4f78468b not found: ID does not exist" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.463421 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" (UID: "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.484668 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f744fb785-k6zt9"] Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.507598 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.511443 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-config" (OuterVolumeSpecName: "config") pod "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" (UID: "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.537796 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" (UID: "bbfd9c4f-37d5-4f3e-a18b-8472892c49e3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.610062 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-config\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.610089 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.882616 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" event={"ID":"bbfd9c4f-37d5-4f3e-a18b-8472892c49e3","Type":"ContainerDied","Data":"c9598956d292d6e696b26aebc6ea24dd64b3673d75d1b81445a399f624e1c431"} Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.884286 4765 scope.go:117] "RemoveContainer" containerID="875526fb4f8c4844a81a92753da14b8ccc0ade9aa3fb20bf4c20d61a248e92ab" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.882729 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-hrfl2" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.892520 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"fdd97dcb-bc57-4867-a85d-be547f7b716f","Type":"ContainerStarted","Data":"53d83aa225bf57298bff92d8ab23c0a501cef36cdbc8c10a1ee909766c341ad4"} Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.892784 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.906436 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerStarted","Data":"e55f0f6c453260c1a7cb1f45be4abbec96e81204d637fb7df9153cd579723233"} Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.919315 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.919290065 podStartE2EDuration="3.919290065s" podCreationTimestamp="2025-12-03 21:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:28:18.907360718 +0000 UTC m=+2996.837905869" watchObservedRunningTime="2025-12-03 21:28:18.919290065 +0000 UTC m=+2996.849835216" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.920640 4765 scope.go:117] "RemoveContainer" containerID="c167b0dce1dffdf1342b374ce5265bae2a92610f87f4190cc688d52b7342f55e" Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.939429 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-hrfl2"] Dec 03 21:28:18 crc kubenswrapper[4765]: I1203 21:28:18.947262 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-hrfl2"] Dec 03 21:28:19 crc kubenswrapper[4765]: I1203 21:28:19.928682 4765 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerStarted","Data":"3742afa38a0479987d9e3051220b184aa23b0050ffaa1629763fec35a13cf507"} Dec 03 21:28:20 crc kubenswrapper[4765]: I1203 21:28:20.370685 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" path="/var/lib/kubelet/pods/5554f4c9-6d19-4486-a97d-c41f400aedd6/volumes" Dec 03 21:28:20 crc kubenswrapper[4765]: I1203 21:28:20.371452 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" path="/var/lib/kubelet/pods/bbfd9c4f-37d5-4f3e-a18b-8472892c49e3/volumes" Dec 03 21:28:20 crc kubenswrapper[4765]: I1203 21:28:20.941078 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerStarted","Data":"cb9c023316b2d813272aca3a6651ac425985820ec731df02b6344dfe2029d53a"} Dec 03 21:28:20 crc kubenswrapper[4765]: I1203 21:28:20.941576 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 21:28:20 crc kubenswrapper[4765]: I1203 21:28:20.971101 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.130171935 podStartE2EDuration="4.971075106s" podCreationTimestamp="2025-12-03 21:28:16 +0000 UTC" firstStartedPulling="2025-12-03 21:28:16.821996196 +0000 UTC m=+2994.752541347" lastFinishedPulling="2025-12-03 21:28:20.662899327 +0000 UTC m=+2998.593444518" observedRunningTime="2025-12-03 21:28:20.965191305 +0000 UTC m=+2998.895736476" watchObservedRunningTime="2025-12-03 21:28:20.971075106 +0000 UTC m=+2998.901620317" Dec 03 21:28:21 crc kubenswrapper[4765]: I1203 21:28:21.002207 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:22 crc kubenswrapper[4765]: I1203 21:28:22.964863 4765 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-central-agent" containerID="cri-o://263c33db6e3c46a04c2f8890d0ffd5b80cb12c648a96b9838f4773bf467c20c7" gracePeriod=30 Dec 03 21:28:22 crc kubenswrapper[4765]: I1203 21:28:22.964913 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="proxy-httpd" containerID="cri-o://cb9c023316b2d813272aca3a6651ac425985820ec731df02b6344dfe2029d53a" gracePeriod=30 Dec 03 21:28:22 crc kubenswrapper[4765]: I1203 21:28:22.964955 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="sg-core" containerID="cri-o://3742afa38a0479987d9e3051220b184aa23b0050ffaa1629763fec35a13cf507" gracePeriod=30 Dec 03 21:28:22 crc kubenswrapper[4765]: I1203 21:28:22.965028 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-notification-agent" containerID="cri-o://e55f0f6c453260c1a7cb1f45be4abbec96e81204d637fb7df9153cd579723233" gracePeriod=30 Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.978965 4765 generic.go:334] "Generic (PLEG): container finished" podID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerID="cb9c023316b2d813272aca3a6651ac425985820ec731df02b6344dfe2029d53a" exitCode=0 Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979362 4765 generic.go:334] "Generic (PLEG): container finished" podID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerID="3742afa38a0479987d9e3051220b184aa23b0050ffaa1629763fec35a13cf507" exitCode=2 Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979373 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerID="e55f0f6c453260c1a7cb1f45be4abbec96e81204d637fb7df9153cd579723233" exitCode=0 Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979384 4765 generic.go:334] "Generic (PLEG): container finished" podID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerID="263c33db6e3c46a04c2f8890d0ffd5b80cb12c648a96b9838f4773bf467c20c7" exitCode=0 Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979096 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerDied","Data":"cb9c023316b2d813272aca3a6651ac425985820ec731df02b6344dfe2029d53a"} Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979427 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerDied","Data":"3742afa38a0479987d9e3051220b184aa23b0050ffaa1629763fec35a13cf507"} Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerDied","Data":"e55f0f6c453260c1a7cb1f45be4abbec96e81204d637fb7df9153cd579723233"} Dec 03 21:28:23 crc kubenswrapper[4765]: I1203 21:28:23.979459 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerDied","Data":"263c33db6e3c46a04c2f8890d0ffd5b80cb12c648a96b9838f4773bf467c20c7"} Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.400939 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.546895 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-combined-ca-bundle\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547044 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-ceilometer-tls-certs\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547073 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsfjf\" (UniqueName: \"kubernetes.io/projected/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-kube-api-access-vsfjf\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547123 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-log-httpd\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547157 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-sg-core-conf-yaml\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547236 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-run-httpd\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547290 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-scripts\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.547379 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-config-data\") pod \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\" (UID: \"b10994b5-2c67-4c43-abfd-ee2bd5e8328f\") " Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.548463 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.549104 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.553765 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-kube-api-access-vsfjf" (OuterVolumeSpecName: "kube-api-access-vsfjf") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "kube-api-access-vsfjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.554608 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-scripts" (OuterVolumeSpecName: "scripts") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.575132 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.605465 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.650609 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.650634 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsfjf\" (UniqueName: \"kubernetes.io/projected/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-kube-api-access-vsfjf\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.650644 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.650653 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.650661 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.650670 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.667678 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: 
"b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.670983 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-config-data" (OuterVolumeSpecName: "config-data") pod "b10994b5-2c67-4c43-abfd-ee2bd5e8328f" (UID: "b10994b5-2c67-4c43-abfd-ee2bd5e8328f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.752576 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.752605 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10994b5-2c67-4c43-abfd-ee2bd5e8328f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.992378 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b10994b5-2c67-4c43-abfd-ee2bd5e8328f","Type":"ContainerDied","Data":"72cdca95c7f1454248effdf8dd631a7bfea1e05c9b6d8439c595f5cf1e395006"} Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.992468 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 21:28:24 crc kubenswrapper[4765]: I1203 21:28:24.992481 4765 scope.go:117] "RemoveContainer" containerID="cb9c023316b2d813272aca3a6651ac425985820ec731df02b6344dfe2029d53a" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.029686 4765 scope.go:117] "RemoveContainer" containerID="3742afa38a0479987d9e3051220b184aa23b0050ffaa1629763fec35a13cf507" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.052215 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.060003 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.080522 4765 scope.go:117] "RemoveContainer" containerID="e55f0f6c453260c1a7cb1f45be4abbec96e81204d637fb7df9153cd579723233" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.081177 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.081608 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon-log" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.081677 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon-log" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.081737 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.081800 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.081859 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="sg-core" 
Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.081959 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="sg-core" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082029 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="proxy-httpd" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.082086 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="proxy-httpd" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082147 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.082203 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082260 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-central-agent" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.082337 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-central-agent" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082417 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon-log" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.082476 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon-log" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082534 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerName="init" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 
21:28:25.082586 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerName="init" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082643 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerName="dnsmasq-dns" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.082699 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerName="dnsmasq-dns" Dec 03 21:28:25 crc kubenswrapper[4765]: E1203 21:28:25.082761 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-notification-agent" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.082817 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-notification-agent" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083026 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083087 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-central-agent" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083148 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfd9c4f-37d5-4f3e-a18b-8472892c49e3" containerName="dnsmasq-dns" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083214 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon-log" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083281 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5554f4c9-6d19-4486-a97d-c41f400aedd6" containerName="horizon" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 
21:28:25.083364 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="proxy-httpd" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083433 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="ceilometer-notification-agent" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083493 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f135f08a-3bde-41df-8f2b-1e910fa18b2d" containerName="horizon-log" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.083551 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" containerName="sg-core" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.085813 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.093839 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.094139 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.094388 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.102450 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.109539 4765 scope.go:117] "RemoveContainer" containerID="263c33db6e3c46a04c2f8890d0ffd5b80cb12c648a96b9838f4773bf467c20c7" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.262748 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.262827 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs7r\" (UniqueName: \"kubernetes.io/projected/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-kube-api-access-jcs7r\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.262889 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-config-data\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.262932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-log-httpd\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.262970 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-scripts\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.263062 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.263088 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-run-httpd\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.263121 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.364809 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-log-httpd\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.364910 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-scripts\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.365032 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.365089 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-run-httpd\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.365142 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.365276 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.365341 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs7r\" (UniqueName: \"kubernetes.io/projected/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-kube-api-access-jcs7r\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.365383 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-config-data\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.366729 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-run-httpd\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " 
pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.366801 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-log-httpd\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.371167 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-config-data\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.374696 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.376179 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.376896 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-scripts\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.385944 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.403917 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs7r\" (UniqueName: \"kubernetes.io/projected/f4c7f313-908a-4e2c-a5a0-3b1626d6e188-kube-api-access-jcs7r\") pod \"ceilometer-0\" (UID: \"f4c7f313-908a-4e2c-a5a0-3b1626d6e188\") " pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.444417 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 21:28:25 crc kubenswrapper[4765]: I1203 21:28:25.936452 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 21:28:26 crc kubenswrapper[4765]: I1203 21:28:26.004411 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerStarted","Data":"5771c3254257610da108d128b5706e8267614705d2cf7e7b82ec4a25634fb1a5"} Dec 03 21:28:26 crc kubenswrapper[4765]: I1203 21:28:26.384276 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10994b5-2c67-4c43-abfd-ee2bd5e8328f" path="/var/lib/kubelet/pods/b10994b5-2c67-4c43-abfd-ee2bd5e8328f/volumes" Dec 03 21:28:26 crc kubenswrapper[4765]: I1203 21:28:26.810398 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.238:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.238:8443: connect: connection refused" Dec 03 21:28:28 crc kubenswrapper[4765]: I1203 21:28:28.037820 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerStarted","Data":"99e4685f5bcbb791a7206589f2cc4445e69d2424a7c154321180f4ae75a5abfd"} Dec 03 21:28:28 crc kubenswrapper[4765]: I1203 21:28:28.663346 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 03 21:28:28 crc kubenswrapper[4765]: I1203 21:28:28.749388 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:28 crc kubenswrapper[4765]: I1203 21:28:28.780526 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 03 21:28:28 crc kubenswrapper[4765]: I1203 21:28:28.857631 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:29 crc kubenswrapper[4765]: I1203 21:28:29.050869 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="manila-share" containerID="cri-o://b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c" gracePeriod=30 Dec 03 21:28:29 crc kubenswrapper[4765]: I1203 21:28:29.051449 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerStarted","Data":"1d975eda1adc6e70f9f3511f2cbbaa38703f53780164c4796501840016474be3"} Dec 03 21:28:29 crc kubenswrapper[4765]: I1203 21:28:29.051712 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="manila-scheduler" containerID="cri-o://c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e" gracePeriod=30 Dec 03 21:28:29 crc kubenswrapper[4765]: I1203 21:28:29.052246 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" 
podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="probe" containerID="cri-o://d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943" gracePeriod=30 Dec 03 21:28:29 crc kubenswrapper[4765]: I1203 21:28:29.052600 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="probe" containerID="cri-o://111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738" gracePeriod=30 Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:29.899740 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.065582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerStarted","Data":"ce3f3a9ae20b86ecb8cdf75c992fe31359e4eb483671ffadeb6254836a6f9d5c"} Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.069688 4765 generic.go:334] "Generic (PLEG): container finished" podID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerID="d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943" exitCode=0 Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.069785 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.069852 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2fbe1782-d690-4d54-92a9-94309a01ae5d","Type":"ContainerDied","Data":"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943"} Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.070032 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2fbe1782-d690-4d54-92a9-94309a01ae5d","Type":"ContainerDied","Data":"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c"} Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.069790 4765 generic.go:334] "Generic (PLEG): container finished" podID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerID="b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c" exitCode=1 Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.070073 4765 scope.go:117] "RemoveContainer" containerID="d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.070190 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2fbe1782-d690-4d54-92a9-94309a01ae5d","Type":"ContainerDied","Data":"bce983a18521d025ae71e3d5d196fe0477cc86d17c779d50ccbcb22f55d6e5c7"} Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.073183 4765 generic.go:334] "Generic (PLEG): container finished" podID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerID="111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738" exitCode=0 Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.073226 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"67e8a1e2-931d-43b6-98dc-eab619aa3dfc","Type":"ContainerDied","Data":"111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738"} Dec 03 21:28:30 crc 
kubenswrapper[4765]: I1203 21:28:30.083830 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-combined-ca-bundle\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.083963 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-ceph\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084064 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-scripts\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084124 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-etc-machine-id\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084140 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-var-lib-manila\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084168 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data\") pod 
\"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084194 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data-custom\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084227 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dhfq\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-kube-api-access-9dhfq\") pod \"2fbe1782-d690-4d54-92a9-94309a01ae5d\" (UID: \"2fbe1782-d690-4d54-92a9-94309a01ae5d\") " Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084229 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.084960 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.086106 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.086133 4765 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2fbe1782-d690-4d54-92a9-94309a01ae5d-var-lib-manila\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.099679 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-scripts" (OuterVolumeSpecName: "scripts") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.101927 4765 scope.go:117] "RemoveContainer" containerID="b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.104395 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.104521 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-ceph" (OuterVolumeSpecName: "ceph") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.107392 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-kube-api-access-9dhfq" (OuterVolumeSpecName: "kube-api-access-9dhfq") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "kube-api-access-9dhfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.155958 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.187739 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.187769 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.187784 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dhfq\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-kube-api-access-9dhfq\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.187799 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.187811 4765 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2fbe1782-d690-4d54-92a9-94309a01ae5d-ceph\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.275503 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data" (OuterVolumeSpecName: "config-data") pod "2fbe1782-d690-4d54-92a9-94309a01ae5d" (UID: "2fbe1782-d690-4d54-92a9-94309a01ae5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.290355 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbe1782-d690-4d54-92a9-94309a01ae5d-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.304135 4765 scope.go:117] "RemoveContainer" containerID="d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943" Dec 03 21:28:30 crc kubenswrapper[4765]: E1203 21:28:30.304974 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943\": container with ID starting with d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943 not found: ID does not exist" containerID="d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305001 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943"} err="failed to get container status 
\"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943\": rpc error: code = NotFound desc = could not find container \"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943\": container with ID starting with d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943 not found: ID does not exist" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305021 4765 scope.go:117] "RemoveContainer" containerID="b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c" Dec 03 21:28:30 crc kubenswrapper[4765]: E1203 21:28:30.305226 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c\": container with ID starting with b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c not found: ID does not exist" containerID="b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305242 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c"} err="failed to get container status \"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c\": rpc error: code = NotFound desc = could not find container \"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c\": container with ID starting with b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c not found: ID does not exist" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305253 4765 scope.go:117] "RemoveContainer" containerID="d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305468 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943"} err="failed to get 
container status \"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943\": rpc error: code = NotFound desc = could not find container \"d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943\": container with ID starting with d9c48f0596d74ebd9a4904aba7f700b90126b4b7f6e59b61888d4f6f5c0dc943 not found: ID does not exist" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305481 4765 scope.go:117] "RemoveContainer" containerID="b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.305651 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c"} err="failed to get container status \"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c\": rpc error: code = NotFound desc = could not find container \"b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c\": container with ID starting with b7574e644df65c2f84459a186c2414b7918f5874d1ca67184e9396f769c7938c not found: ID does not exist" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.410802 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.427208 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.446929 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:30 crc kubenswrapper[4765]: E1203 21:28:30.447448 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="probe" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.447473 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="probe" Dec 03 21:28:30 crc 
kubenswrapper[4765]: E1203 21:28:30.447504 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="manila-share" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.447513 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="manila-share" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.447752 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="manila-share" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.447777 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" containerName="probe" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.448800 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.453062 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.469545 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499004 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-scripts\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499357 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d62facf-5ee9-45cf-a031-15834157a662-etc-machine-id\") pod \"manila-share-share1-0\" (UID: 
\"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499382 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7d62facf-5ee9-45cf-a031-15834157a662-ceph\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499437 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7d62facf-5ee9-45cf-a031-15834157a662-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499614 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499681 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-config-data\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " 
pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.499806 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsfm\" (UniqueName: \"kubernetes.io/projected/7d62facf-5ee9-45cf-a031-15834157a662-kube-api-access-kjsfm\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602202 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602255 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-config-data\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602313 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjsfm\" (UniqueName: \"kubernetes.io/projected/7d62facf-5ee9-45cf-a031-15834157a662-kube-api-access-kjsfm\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602402 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-scripts\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 
21:28:30.602449 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d62facf-5ee9-45cf-a031-15834157a662-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602462 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7d62facf-5ee9-45cf-a031-15834157a662-ceph\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602484 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602509 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7d62facf-5ee9-45cf-a031-15834157a662-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.602635 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d62facf-5ee9-45cf-a031-15834157a662-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.603031 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/7d62facf-5ee9-45cf-a031-15834157a662-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.606268 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-scripts\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.606454 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.606719 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.607798 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d62facf-5ee9-45cf-a031-15834157a662-config-data\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.610481 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7d62facf-5ee9-45cf-a031-15834157a662-ceph\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" 
Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.618572 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjsfm\" (UniqueName: \"kubernetes.io/projected/7d62facf-5ee9-45cf-a031-15834157a662-kube-api-access-kjsfm\") pod \"manila-share-share1-0\" (UID: \"7d62facf-5ee9-45cf-a031-15834157a662\") " pod="openstack/manila-share-share1-0" Dec 03 21:28:30 crc kubenswrapper[4765]: I1203 21:28:30.773453 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.086604 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerStarted","Data":"f737b243dad9eecbb51ca779604877fd73daf13da0c56a882aaa881ba1821f2a"} Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.087161 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.123449 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.907565032 podStartE2EDuration="6.123420763s" podCreationTimestamp="2025-12-03 21:28:25 +0000 UTC" firstStartedPulling="2025-12-03 21:28:25.935466564 +0000 UTC m=+3003.866011755" lastFinishedPulling="2025-12-03 21:28:30.151322335 +0000 UTC m=+3008.081867486" observedRunningTime="2025-12-03 21:28:31.108717806 +0000 UTC m=+3009.039262957" watchObservedRunningTime="2025-12-03 21:28:31.123420763 +0000 UTC m=+3009.053965934" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.502175 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.831436 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.951172 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-combined-ca-bundle\") pod \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.951244 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khrcv\" (UniqueName: \"kubernetes.io/projected/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-kube-api-access-khrcv\") pod \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.951466 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data\") pod \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.951489 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-scripts\") pod \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.951550 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-etc-machine-id\") pod \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.951569 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data-custom\") pod \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\" (UID: \"67e8a1e2-931d-43b6-98dc-eab619aa3dfc\") " Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.952034 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67e8a1e2-931d-43b6-98dc-eab619aa3dfc" (UID: "67e8a1e2-931d-43b6-98dc-eab619aa3dfc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.956599 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67e8a1e2-931d-43b6-98dc-eab619aa3dfc" (UID: "67e8a1e2-931d-43b6-98dc-eab619aa3dfc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.956727 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-scripts" (OuterVolumeSpecName: "scripts") pod "67e8a1e2-931d-43b6-98dc-eab619aa3dfc" (UID: "67e8a1e2-931d-43b6-98dc-eab619aa3dfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:31 crc kubenswrapper[4765]: I1203 21:28:31.958427 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-kube-api-access-khrcv" (OuterVolumeSpecName: "kube-api-access-khrcv") pod "67e8a1e2-931d-43b6-98dc-eab619aa3dfc" (UID: "67e8a1e2-931d-43b6-98dc-eab619aa3dfc"). InnerVolumeSpecName "kube-api-access-khrcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.015372 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67e8a1e2-931d-43b6-98dc-eab619aa3dfc" (UID: "67e8a1e2-931d-43b6-98dc-eab619aa3dfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.054579 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.054609 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.054625 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.054636 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khrcv\" (UniqueName: \"kubernetes.io/projected/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-kube-api-access-khrcv\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.054649 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.076831 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data" (OuterVolumeSpecName: "config-data") pod "67e8a1e2-931d-43b6-98dc-eab619aa3dfc" (UID: "67e8a1e2-931d-43b6-98dc-eab619aa3dfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.096377 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7d62facf-5ee9-45cf-a031-15834157a662","Type":"ContainerStarted","Data":"30222b030238bd4e51a110e5113b56cda7ae627a6e9df2b6b0554ca9a6fe1d1a"} Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.098244 4765 generic.go:334] "Generic (PLEG): container finished" podID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerID="c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e" exitCode=0 Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.099476 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.099553 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"67e8a1e2-931d-43b6-98dc-eab619aa3dfc","Type":"ContainerDied","Data":"c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e"} Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.099622 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"67e8a1e2-931d-43b6-98dc-eab619aa3dfc","Type":"ContainerDied","Data":"bbd126d8ac08ff7bde047e07283e73d71c2bc6611f44663befb8486e18bf7812"} Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.099643 4765 scope.go:117] "RemoveContainer" containerID="111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.159599 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/67e8a1e2-931d-43b6-98dc-eab619aa3dfc-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.163533 4765 scope.go:117] "RemoveContainer" containerID="c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.173128 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.196531 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.207258 4765 scope.go:117] "RemoveContainer" containerID="111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738" Dec 03 21:28:32 crc kubenswrapper[4765]: E1203 21:28:32.207731 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738\": container with ID starting with 111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738 not found: ID does not exist" containerID="111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.207838 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738"} err="failed to get container status \"111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738\": rpc error: code = NotFound desc = could not find container \"111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738\": container with ID starting with 111efdf96eaeda632a5def48f843687a46254218e12ee7031a4a815aad4af738 not found: ID does not exist" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.207931 4765 scope.go:117] "RemoveContainer" 
containerID="c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e" Dec 03 21:28:32 crc kubenswrapper[4765]: E1203 21:28:32.208393 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e\": container with ID starting with c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e not found: ID does not exist" containerID="c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.208429 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e"} err="failed to get container status \"c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e\": rpc error: code = NotFound desc = could not find container \"c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e\": container with ID starting with c11b98b39ac2d5b04f037cc2b883fb49c5242903d7f740438d135dfa306d385e not found: ID does not exist" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.218876 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:32 crc kubenswrapper[4765]: E1203 21:28:32.219379 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="probe" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.219400 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="probe" Dec 03 21:28:32 crc kubenswrapper[4765]: E1203 21:28:32.219428 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="manila-scheduler" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.219446 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="manila-scheduler" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.219692 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="manila-scheduler" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.219719 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" containerName="probe" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.220936 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.225649 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.226944 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.261426 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.261548 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.261582 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hpg\" (UniqueName: 
\"kubernetes.io/projected/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-kube-api-access-r4hpg\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.261623 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-scripts\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.261667 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.261695 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-config-data\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363477 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363536 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-config-data\") pod \"manila-scheduler-0\" (UID: 
\"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363687 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363818 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363849 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4hpg\" (UniqueName: \"kubernetes.io/projected/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-kube-api-access-r4hpg\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363891 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-scripts\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.363922 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.368480 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-scripts\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.368757 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-config-data\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.368877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.379924 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.385442 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4hpg\" (UniqueName: \"kubernetes.io/projected/8dd498cd-6ec2-4d8f-ad18-72aae897e33e-kube-api-access-r4hpg\") pod \"manila-scheduler-0\" (UID: \"8dd498cd-6ec2-4d8f-ad18-72aae897e33e\") " pod="openstack/manila-scheduler-0" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.390127 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbe1782-d690-4d54-92a9-94309a01ae5d" path="/var/lib/kubelet/pods/2fbe1782-d690-4d54-92a9-94309a01ae5d/volumes" Dec 03 
21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.391613 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e8a1e2-931d-43b6-98dc-eab619aa3dfc" path="/var/lib/kubelet/pods/67e8a1e2-931d-43b6-98dc-eab619aa3dfc/volumes" Dec 03 21:28:32 crc kubenswrapper[4765]: I1203 21:28:32.546604 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 03 21:28:33 crc kubenswrapper[4765]: I1203 21:28:33.098333 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 03 21:28:33 crc kubenswrapper[4765]: I1203 21:28:33.113237 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7d62facf-5ee9-45cf-a031-15834157a662","Type":"ContainerStarted","Data":"27ef8ce83284684f2bd2c9039e5e0d8260db80a5165f72c8e0b41856e3af3be0"} Dec 03 21:28:33 crc kubenswrapper[4765]: I1203 21:28:33.113282 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7d62facf-5ee9-45cf-a031-15834157a662","Type":"ContainerStarted","Data":"139699d92d758bc04df30308215ea7bec5c0c81fd6d9a56ff749118c60ca7034"} Dec 03 21:28:33 crc kubenswrapper[4765]: I1203 21:28:33.151064 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.151047855 podStartE2EDuration="3.151047855s" podCreationTimestamp="2025-12-03 21:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:28:33.1399315 +0000 UTC m=+3011.070476661" watchObservedRunningTime="2025-12-03 21:28:33.151047855 +0000 UTC m=+3011.081593006" Dec 03 21:28:34 crc kubenswrapper[4765]: I1203 21:28:34.151077 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"8dd498cd-6ec2-4d8f-ad18-72aae897e33e","Type":"ContainerStarted","Data":"bbacc905d6ce4ed4bef9b11a0585dcb98660c24005a189169f141d5e6e857a23"} Dec 03 21:28:34 crc kubenswrapper[4765]: I1203 21:28:34.152235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8dd498cd-6ec2-4d8f-ad18-72aae897e33e","Type":"ContainerStarted","Data":"9ed37ab16bdba616783d2203fc6fbda83e585846590d5847e1e89db0ac7127a6"} Dec 03 21:28:34 crc kubenswrapper[4765]: I1203 21:28:34.152271 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8dd498cd-6ec2-4d8f-ad18-72aae897e33e","Type":"ContainerStarted","Data":"d9632ee309c50c42bf986c1379ed14a85656edd201f246fa8480b61ed95f123b"} Dec 03 21:28:34 crc kubenswrapper[4765]: I1203 21:28:34.183095 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.183075998 podStartE2EDuration="2.183075998s" podCreationTimestamp="2025-12-03 21:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:28:34.180972805 +0000 UTC m=+3012.111517986" watchObservedRunningTime="2025-12-03 21:28:34.183075998 +0000 UTC m=+3012.113621149" Dec 03 21:28:36 crc kubenswrapper[4765]: I1203 21:28:36.810503 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6c468b5ffd-8p2bd" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.238:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.238:8443: connect: connection refused" Dec 03 21:28:37 crc kubenswrapper[4765]: I1203 21:28:37.386973 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Dec 03 21:28:40 crc kubenswrapper[4765]: I1203 21:28:40.774171 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/manila-share-share1-0" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.807927 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.892908 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a49be96-f6b0-4694-b6d1-24dbaf704602-logs\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.893038 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-secret-key\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.893599 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a49be96-f6b0-4694-b6d1-24dbaf704602-logs" (OuterVolumeSpecName: "logs") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.894046 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-tls-certs\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.894163 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-combined-ca-bundle\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.894228 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-scripts\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.894330 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mzjz\" (UniqueName: \"kubernetes.io/projected/6a49be96-f6b0-4694-b6d1-24dbaf704602-kube-api-access-8mzjz\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.894427 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-config-data\") pod \"6a49be96-f6b0-4694-b6d1-24dbaf704602\" (UID: \"6a49be96-f6b0-4694-b6d1-24dbaf704602\") " Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.894948 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6a49be96-f6b0-4694-b6d1-24dbaf704602-logs\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.899907 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a49be96-f6b0-4694-b6d1-24dbaf704602-kube-api-access-8mzjz" (OuterVolumeSpecName: "kube-api-access-8mzjz") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "kube-api-access-8mzjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.917703 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.932251 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-scripts" (OuterVolumeSpecName: "scripts") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.945100 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-config-data" (OuterVolumeSpecName: "config-data") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.948125 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.951230 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6a49be96-f6b0-4694-b6d1-24dbaf704602" (UID: "6a49be96-f6b0-4694-b6d1-24dbaf704602"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.997533 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.997604 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.997634 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.997661 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49be96-f6b0-4694-b6d1-24dbaf704602-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.997684 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a49be96-f6b0-4694-b6d1-24dbaf704602-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:41 crc kubenswrapper[4765]: I1203 21:28:41.997704 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mzjz\" (UniqueName: \"kubernetes.io/projected/6a49be96-f6b0-4694-b6d1-24dbaf704602-kube-api-access-8mzjz\") on node \"crc\" DevicePath \"\"" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.261221 4765 generic.go:334] "Generic (PLEG): container finished" podID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerID="bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26" exitCode=137 Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.261270 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c468b5ffd-8p2bd" event={"ID":"6a49be96-f6b0-4694-b6d1-24dbaf704602","Type":"ContainerDied","Data":"bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26"} Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.261324 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c468b5ffd-8p2bd" event={"ID":"6a49be96-f6b0-4694-b6d1-24dbaf704602","Type":"ContainerDied","Data":"5be1a2176904f3cb5d59d56fbb262443310ad42d8a77312dfe3db6fd0c685e7e"} Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.261346 4765 scope.go:117] "RemoveContainer" containerID="2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.261487 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c468b5ffd-8p2bd" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.315767 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c468b5ffd-8p2bd"] Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.332738 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c468b5ffd-8p2bd"] Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.383437 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" path="/var/lib/kubelet/pods/6a49be96-f6b0-4694-b6d1-24dbaf704602/volumes" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.509788 4765 scope.go:117] "RemoveContainer" containerID="bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.547225 4765 scope.go:117] "RemoveContainer" containerID="2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.547483 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 03 21:28:42 crc kubenswrapper[4765]: E1203 21:28:42.548117 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635\": container with ID starting with 2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635 not found: ID does not exist" containerID="2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.548183 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635"} err="failed to get container status \"2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635\": rpc error: code = NotFound 
desc = could not find container \"2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635\": container with ID starting with 2951e1ed960acd646d57ffd7b665b1337358433261b79ab45930276b110fb635 not found: ID does not exist" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.548227 4765 scope.go:117] "RemoveContainer" containerID="bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26" Dec 03 21:28:42 crc kubenswrapper[4765]: E1203 21:28:42.548757 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26\": container with ID starting with bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26 not found: ID does not exist" containerID="bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26" Dec 03 21:28:42 crc kubenswrapper[4765]: I1203 21:28:42.548796 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26"} err="failed to get container status \"bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26\": rpc error: code = NotFound desc = could not find container \"bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26\": container with ID starting with bdbdf53d56f4ffd66f5a7068dc5e7b6065b43d0e7e8016a09a55dd4a6e587e26 not found: ID does not exist" Dec 03 21:28:52 crc kubenswrapper[4765]: I1203 21:28:52.284808 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 03 21:28:54 crc kubenswrapper[4765]: I1203 21:28:54.246828 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 03 21:28:55 crc kubenswrapper[4765]: I1203 21:28:55.455958 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 21:29:15 crc 
kubenswrapper[4765]: E1203 21:29:15.447674 4765 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.65:51698->38.102.83.65:33367: write tcp 38.102.83.65:51698->38.102.83.65:33367: write: broken pipe Dec 03 21:29:24 crc kubenswrapper[4765]: I1203 21:29:24.798337 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:29:24 crc kubenswrapper[4765]: I1203 21:29:24.799007 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.974324 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 21:29:52 crc kubenswrapper[4765]: E1203 21:29:52.976120 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.976215 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" Dec 03 21:29:52 crc kubenswrapper[4765]: E1203 21:29:52.976324 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon-log" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.976402 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon-log" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.976692 4765 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon-log" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.976777 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a49be96-f6b0-4694-b6d1-24dbaf704602" containerName="horizon" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.977656 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.981823 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.981947 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.982041 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-n4r4n" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.982129 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 03 21:29:52 crc kubenswrapper[4765]: I1203 21:29:52.986683 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143436 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143539 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hg7r\" (UniqueName: \"kubernetes.io/projected/a4425100-38b1-43b3-90ba-8691dcf4d4aa-kube-api-access-8hg7r\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143694 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143721 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-config-data\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143745 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.143826 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245184 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245260 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245367 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245394 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hg7r\" (UniqueName: \"kubernetes.io/projected/a4425100-38b1-43b3-90ba-8691dcf4d4aa-kube-api-access-8hg7r\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245509 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245565 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245592 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-config-data\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245618 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.245651 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.246377 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.246652 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.246652 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.247274 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.248294 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-config-data\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.253808 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.254133 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.254671 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.270222 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hg7r\" (UniqueName: \"kubernetes.io/projected/a4425100-38b1-43b3-90ba-8691dcf4d4aa-kube-api-access-8hg7r\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.296156 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.313501 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 21:29:53 crc kubenswrapper[4765]: I1203 21:29:53.772443 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 21:29:54 crc kubenswrapper[4765]: I1203 21:29:54.053246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4425100-38b1-43b3-90ba-8691dcf4d4aa","Type":"ContainerStarted","Data":"ec03530d1de7d54c45213baf88d28a73f3166401f79e102f623def41197a9bf7"} Dec 03 21:29:54 crc kubenswrapper[4765]: I1203 21:29:54.798240 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:29:54 crc kubenswrapper[4765]: I1203 21:29:54.798640 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.156161 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps"] Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.158184 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.159846 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.160173 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.180845 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps"] Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.201539 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceca3e3e-53c6-405f-85a7-1b7640695481-config-volume\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.201605 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtwf\" (UniqueName: \"kubernetes.io/projected/ceca3e3e-53c6-405f-85a7-1b7640695481-kube-api-access-wdtwf\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.201682 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceca3e3e-53c6-405f-85a7-1b7640695481-secret-volume\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.303628 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceca3e3e-53c6-405f-85a7-1b7640695481-secret-volume\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.303750 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceca3e3e-53c6-405f-85a7-1b7640695481-config-volume\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.303796 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdtwf\" (UniqueName: \"kubernetes.io/projected/ceca3e3e-53c6-405f-85a7-1b7640695481-kube-api-access-wdtwf\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.304848 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceca3e3e-53c6-405f-85a7-1b7640695481-config-volume\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.310364 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ceca3e3e-53c6-405f-85a7-1b7640695481-secret-volume\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.319924 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdtwf\" (UniqueName: \"kubernetes.io/projected/ceca3e3e-53c6-405f-85a7-1b7640695481-kube-api-access-wdtwf\") pod \"collect-profiles-29413290-cjlps\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:00 crc kubenswrapper[4765]: I1203 21:30:00.485664 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:01 crc kubenswrapper[4765]: I1203 21:30:01.394782 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps"] Dec 03 21:30:02 crc kubenswrapper[4765]: I1203 21:30:02.152027 4765 generic.go:334] "Generic (PLEG): container finished" podID="ceca3e3e-53c6-405f-85a7-1b7640695481" containerID="f65bd27872daec3bbd5c6483535760e429b1143a0484e8051f5278cc697c12e5" exitCode=0 Dec 03 21:30:02 crc kubenswrapper[4765]: I1203 21:30:02.152137 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" event={"ID":"ceca3e3e-53c6-405f-85a7-1b7640695481","Type":"ContainerDied","Data":"f65bd27872daec3bbd5c6483535760e429b1143a0484e8051f5278cc697c12e5"} Dec 03 21:30:02 crc kubenswrapper[4765]: I1203 21:30:02.152536 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" 
event={"ID":"ceca3e3e-53c6-405f-85a7-1b7640695481","Type":"ContainerStarted","Data":"8b75f6ce2360d99d0981924ca14b762c41d21576a83ae9bc1ae11d7e0ffb6eb2"} Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.555900 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.671766 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdtwf\" (UniqueName: \"kubernetes.io/projected/ceca3e3e-53c6-405f-85a7-1b7640695481-kube-api-access-wdtwf\") pod \"ceca3e3e-53c6-405f-85a7-1b7640695481\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.672059 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceca3e3e-53c6-405f-85a7-1b7640695481-config-volume\") pod \"ceca3e3e-53c6-405f-85a7-1b7640695481\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.672269 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceca3e3e-53c6-405f-85a7-1b7640695481-secret-volume\") pod \"ceca3e3e-53c6-405f-85a7-1b7640695481\" (UID: \"ceca3e3e-53c6-405f-85a7-1b7640695481\") " Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.675009 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceca3e3e-53c6-405f-85a7-1b7640695481-config-volume" (OuterVolumeSpecName: "config-volume") pod "ceca3e3e-53c6-405f-85a7-1b7640695481" (UID: "ceca3e3e-53c6-405f-85a7-1b7640695481"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.679291 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceca3e3e-53c6-405f-85a7-1b7640695481-kube-api-access-wdtwf" (OuterVolumeSpecName: "kube-api-access-wdtwf") pod "ceca3e3e-53c6-405f-85a7-1b7640695481" (UID: "ceca3e3e-53c6-405f-85a7-1b7640695481"). InnerVolumeSpecName "kube-api-access-wdtwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.679921 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceca3e3e-53c6-405f-85a7-1b7640695481-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ceca3e3e-53c6-405f-85a7-1b7640695481" (UID: "ceca3e3e-53c6-405f-85a7-1b7640695481"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.775485 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdtwf\" (UniqueName: \"kubernetes.io/projected/ceca3e3e-53c6-405f-85a7-1b7640695481-kube-api-access-wdtwf\") on node \"crc\" DevicePath \"\"" Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.775519 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceca3e3e-53c6-405f-85a7-1b7640695481-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 21:30:03 crc kubenswrapper[4765]: I1203 21:30:03.775528 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceca3e3e-53c6-405f-85a7-1b7640695481-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 21:30:04 crc kubenswrapper[4765]: I1203 21:30:04.176652 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" 
event={"ID":"ceca3e3e-53c6-405f-85a7-1b7640695481","Type":"ContainerDied","Data":"8b75f6ce2360d99d0981924ca14b762c41d21576a83ae9bc1ae11d7e0ffb6eb2"} Dec 03 21:30:04 crc kubenswrapper[4765]: I1203 21:30:04.176750 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b75f6ce2360d99d0981924ca14b762c41d21576a83ae9bc1ae11d7e0ffb6eb2" Dec 03 21:30:04 crc kubenswrapper[4765]: I1203 21:30:04.176912 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413290-cjlps" Dec 03 21:30:04 crc kubenswrapper[4765]: I1203 21:30:04.620394 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"] Dec 03 21:30:04 crc kubenswrapper[4765]: I1203 21:30:04.629715 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413245-n5gdd"] Dec 03 21:30:06 crc kubenswrapper[4765]: I1203 21:30:06.371170 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c59fd5-d633-4b31-b5fd-7171033bc0de" path="/var/lib/kubelet/pods/93c59fd5-d633-4b31-b5fd-7171033bc0de/volumes" Dec 03 21:30:22 crc kubenswrapper[4765]: E1203 21:30:22.759286 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 03 21:30:22 crc kubenswrapper[4765]: E1203 21:30:22.760101 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hg7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a4425100-38b1-43b3-90ba-8691dcf4d4aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 21:30:22 crc kubenswrapper[4765]: E1203 21:30:22.761735 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a4425100-38b1-43b3-90ba-8691dcf4d4aa" Dec 03 21:30:23 crc kubenswrapper[4765]: E1203 21:30:23.416970 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a4425100-38b1-43b3-90ba-8691dcf4d4aa" Dec 03 21:30:24 crc 
kubenswrapper[4765]: I1203 21:30:24.798200 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:30:24 crc kubenswrapper[4765]: I1203 21:30:24.799235 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:30:24 crc kubenswrapper[4765]: I1203 21:30:24.799362 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:30:24 crc kubenswrapper[4765]: I1203 21:30:24.800450 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:30:24 crc kubenswrapper[4765]: I1203 21:30:24.800583 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" gracePeriod=600 Dec 03 21:30:24 crc kubenswrapper[4765]: E1203 21:30:24.935543 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:30:25 crc kubenswrapper[4765]: I1203 21:30:25.439578 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" exitCode=0 Dec 03 21:30:25 crc kubenswrapper[4765]: I1203 21:30:25.439624 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de"} Dec 03 21:30:25 crc kubenswrapper[4765]: I1203 21:30:25.439685 4765 scope.go:117] "RemoveContainer" containerID="41d538dd921f81e82465b8f23aeafdb03ce9f27f87159160ebeb5c02c6c079b1" Dec 03 21:30:25 crc kubenswrapper[4765]: I1203 21:30:25.440523 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:30:25 crc kubenswrapper[4765]: E1203 21:30:25.441111 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:30:30 crc kubenswrapper[4765]: I1203 21:30:30.958134 4765 scope.go:117] "RemoveContainer" containerID="e2f09819e7bf142fca812c117fd1c6dc7458e7c5bfb4ce9453f2e87ffc1acd57" Dec 03 21:30:36 crc kubenswrapper[4765]: I1203 21:30:36.360630 4765 scope.go:117] "RemoveContainer" 
containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:30:36 crc kubenswrapper[4765]: E1203 21:30:36.361628 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:30:38 crc kubenswrapper[4765]: I1203 21:30:38.828179 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 21:30:40 crc kubenswrapper[4765]: I1203 21:30:40.628965 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4425100-38b1-43b3-90ba-8691dcf4d4aa","Type":"ContainerStarted","Data":"63b2e5c4092441bcbfbcb875cab522208ffbdd671406fc78fa1a62e3238c1538"} Dec 03 21:30:40 crc kubenswrapper[4765]: I1203 21:30:40.658659 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.617187328 podStartE2EDuration="49.658642412s" podCreationTimestamp="2025-12-03 21:29:51 +0000 UTC" firstStartedPulling="2025-12-03 21:29:53.7830718 +0000 UTC m=+3091.713616971" lastFinishedPulling="2025-12-03 21:30:38.824526894 +0000 UTC m=+3136.755072055" observedRunningTime="2025-12-03 21:30:40.651020727 +0000 UTC m=+3138.581565948" watchObservedRunningTime="2025-12-03 21:30:40.658642412 +0000 UTC m=+3138.589187563" Dec 03 21:30:48 crc kubenswrapper[4765]: I1203 21:30:48.360284 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:30:48 crc kubenswrapper[4765]: E1203 21:30:48.361134 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:31:03 crc kubenswrapper[4765]: I1203 21:31:03.360145 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:31:03 crc kubenswrapper[4765]: E1203 21:31:03.361896 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.152941 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvpff"] Dec 03 21:31:12 crc kubenswrapper[4765]: E1203 21:31:12.154157 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceca3e3e-53c6-405f-85a7-1b7640695481" containerName="collect-profiles" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.154264 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceca3e3e-53c6-405f-85a7-1b7640695481" containerName="collect-profiles" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.154502 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceca3e3e-53c6-405f-85a7-1b7640695481" containerName="collect-profiles" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.156295 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.173408 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvpff"] Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.253348 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d900a1a5-3df1-4443-a451-301f156d5c07-catalog-content\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.253426 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vttx9\" (UniqueName: \"kubernetes.io/projected/d900a1a5-3df1-4443-a451-301f156d5c07-kube-api-access-vttx9\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.253655 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d900a1a5-3df1-4443-a451-301f156d5c07-utilities\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.354980 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d900a1a5-3df1-4443-a451-301f156d5c07-catalog-content\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.355052 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vttx9\" (UniqueName: \"kubernetes.io/projected/d900a1a5-3df1-4443-a451-301f156d5c07-kube-api-access-vttx9\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.355212 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d900a1a5-3df1-4443-a451-301f156d5c07-utilities\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.355527 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d900a1a5-3df1-4443-a451-301f156d5c07-catalog-content\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.355659 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d900a1a5-3df1-4443-a451-301f156d5c07-utilities\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.377002 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vttx9\" (UniqueName: \"kubernetes.io/projected/d900a1a5-3df1-4443-a451-301f156d5c07-kube-api-access-vttx9\") pod \"certified-operators-mvpff\" (UID: \"d900a1a5-3df1-4443-a451-301f156d5c07\") " pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:12 crc kubenswrapper[4765]: I1203 21:31:12.508559 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:13 crc kubenswrapper[4765]: I1203 21:31:13.087760 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvpff"] Dec 03 21:31:14 crc kubenswrapper[4765]: I1203 21:31:14.019859 4765 generic.go:334] "Generic (PLEG): container finished" podID="d900a1a5-3df1-4443-a451-301f156d5c07" containerID="de203a44c45ee5babf78e56d4c647222cf6c40857883ff05e71a93b5629dd2c6" exitCode=0 Dec 03 21:31:14 crc kubenswrapper[4765]: I1203 21:31:14.019940 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvpff" event={"ID":"d900a1a5-3df1-4443-a451-301f156d5c07","Type":"ContainerDied","Data":"de203a44c45ee5babf78e56d4c647222cf6c40857883ff05e71a93b5629dd2c6"} Dec 03 21:31:14 crc kubenswrapper[4765]: I1203 21:31:14.020417 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvpff" event={"ID":"d900a1a5-3df1-4443-a451-301f156d5c07","Type":"ContainerStarted","Data":"49dec6c4063429b9c6ee9c7441932b8b71378b2949146ebc6c3d181cc5859957"} Dec 03 21:31:16 crc kubenswrapper[4765]: I1203 21:31:16.360550 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:31:16 crc kubenswrapper[4765]: E1203 21:31:16.361232 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:31:19 crc kubenswrapper[4765]: I1203 21:31:19.074199 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvpff" 
event={"ID":"d900a1a5-3df1-4443-a451-301f156d5c07","Type":"ContainerStarted","Data":"5966a2b91c534dec0add511d943b6b47fa8a188775dd0ba68c4d94afbf049d08"} Dec 03 21:31:20 crc kubenswrapper[4765]: I1203 21:31:20.089128 4765 generic.go:334] "Generic (PLEG): container finished" podID="d900a1a5-3df1-4443-a451-301f156d5c07" containerID="5966a2b91c534dec0add511d943b6b47fa8a188775dd0ba68c4d94afbf049d08" exitCode=0 Dec 03 21:31:20 crc kubenswrapper[4765]: I1203 21:31:20.089229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvpff" event={"ID":"d900a1a5-3df1-4443-a451-301f156d5c07","Type":"ContainerDied","Data":"5966a2b91c534dec0add511d943b6b47fa8a188775dd0ba68c4d94afbf049d08"} Dec 03 21:31:22 crc kubenswrapper[4765]: I1203 21:31:22.113273 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvpff" event={"ID":"d900a1a5-3df1-4443-a451-301f156d5c07","Type":"ContainerStarted","Data":"81eba3a95c4de79f4202e1fcf54ac4adb5c38f05e2f8fee2990cc279490bbe6e"} Dec 03 21:31:22 crc kubenswrapper[4765]: I1203 21:31:22.136074 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvpff" podStartSLOduration=2.974693326 podStartE2EDuration="10.136056613s" podCreationTimestamp="2025-12-03 21:31:12 +0000 UTC" firstStartedPulling="2025-12-03 21:31:14.022574168 +0000 UTC m=+3171.953119319" lastFinishedPulling="2025-12-03 21:31:21.183937425 +0000 UTC m=+3179.114482606" observedRunningTime="2025-12-03 21:31:22.131839589 +0000 UTC m=+3180.062384740" watchObservedRunningTime="2025-12-03 21:31:22.136056613 +0000 UTC m=+3180.066601764" Dec 03 21:31:22 crc kubenswrapper[4765]: I1203 21:31:22.509102 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:22 crc kubenswrapper[4765]: I1203 21:31:22.509252 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:23 crc kubenswrapper[4765]: I1203 21:31:23.568685 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mvpff" podUID="d900a1a5-3df1-4443-a451-301f156d5c07" containerName="registry-server" probeResult="failure" output=< Dec 03 21:31:23 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 21:31:23 crc kubenswrapper[4765]: > Dec 03 21:31:31 crc kubenswrapper[4765]: I1203 21:31:31.363575 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:31:31 crc kubenswrapper[4765]: E1203 21:31:31.364201 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:31:32 crc kubenswrapper[4765]: I1203 21:31:32.558907 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:32 crc kubenswrapper[4765]: I1203 21:31:32.623533 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvpff" Dec 03 21:31:32 crc kubenswrapper[4765]: I1203 21:31:32.703636 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvpff"] Dec 03 21:31:32 crc kubenswrapper[4765]: I1203 21:31:32.806091 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jz5ss"] Dec 03 21:31:32 crc kubenswrapper[4765]: I1203 21:31:32.806318 4765 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-jz5ss" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="registry-server" containerID="cri-o://c8d36e54bc750618285e16dd87e98a3ec63eb84b85768d72e428c6c03c5a0db3" gracePeriod=2 Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.004328 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pjw7v"] Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.004606 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pjw7v" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="registry-server" containerID="cri-o://b3f3698de39b006f5317bfacb97417c8eeb92f91ff764b708b4903ea23106f6d" gracePeriod=2 Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.280339 4765 generic.go:334] "Generic (PLEG): container finished" podID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerID="b3f3698de39b006f5317bfacb97417c8eeb92f91ff764b708b4903ea23106f6d" exitCode=0 Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.280529 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjw7v" event={"ID":"711b4e95-ecdb-4d3b-9bd9-7a1473108d42","Type":"ContainerDied","Data":"b3f3698de39b006f5317bfacb97417c8eeb92f91ff764b708b4903ea23106f6d"} Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.283964 4765 generic.go:334] "Generic (PLEG): container finished" podID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerID="c8d36e54bc750618285e16dd87e98a3ec63eb84b85768d72e428c6c03c5a0db3" exitCode=0 Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.284073 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerDied","Data":"c8d36e54bc750618285e16dd87e98a3ec63eb84b85768d72e428c6c03c5a0db3"} Dec 03 21:31:33 crc 
kubenswrapper[4765]: I1203 21:31:33.516525 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.724407 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.726269 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-utilities\") pod \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.726387 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-catalog-content\") pod \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.726488 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g64g2\" (UniqueName: \"kubernetes.io/projected/fabbb260-e586-47ea-99a9-d34da1d9d2b9-kube-api-access-g64g2\") pod \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\" (UID: \"fabbb260-e586-47ea-99a9-d34da1d9d2b9\") " Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.726893 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-utilities" (OuterVolumeSpecName: "utilities") pod "fabbb260-e586-47ea-99a9-d34da1d9d2b9" (UID: "fabbb260-e586-47ea-99a9-d34da1d9d2b9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.732240 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabbb260-e586-47ea-99a9-d34da1d9d2b9-kube-api-access-g64g2" (OuterVolumeSpecName: "kube-api-access-g64g2") pod "fabbb260-e586-47ea-99a9-d34da1d9d2b9" (UID: "fabbb260-e586-47ea-99a9-d34da1d9d2b9"). InnerVolumeSpecName "kube-api-access-g64g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.774228 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabbb260-e586-47ea-99a9-d34da1d9d2b9" (UID: "fabbb260-e586-47ea-99a9-d34da1d9d2b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.831426 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-catalog-content\") pod \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.831526 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-utilities\") pod \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\" (UID: \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.831610 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzk44\" (UniqueName: \"kubernetes.io/projected/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-kube-api-access-kzk44\") pod \"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\" (UID: 
\"711b4e95-ecdb-4d3b-9bd9-7a1473108d42\") " Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.832476 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-utilities" (OuterVolumeSpecName: "utilities") pod "711b4e95-ecdb-4d3b-9bd9-7a1473108d42" (UID: "711b4e95-ecdb-4d3b-9bd9-7a1473108d42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.832581 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.832595 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbb260-e586-47ea-99a9-d34da1d9d2b9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.832607 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g64g2\" (UniqueName: \"kubernetes.io/projected/fabbb260-e586-47ea-99a9-d34da1d9d2b9-kube-api-access-g64g2\") on node \"crc\" DevicePath \"\"" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.835510 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-kube-api-access-kzk44" (OuterVolumeSpecName: "kube-api-access-kzk44") pod "711b4e95-ecdb-4d3b-9bd9-7a1473108d42" (UID: "711b4e95-ecdb-4d3b-9bd9-7a1473108d42"). InnerVolumeSpecName "kube-api-access-kzk44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.877753 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "711b4e95-ecdb-4d3b-9bd9-7a1473108d42" (UID: "711b4e95-ecdb-4d3b-9bd9-7a1473108d42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.934954 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.935007 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:31:33 crc kubenswrapper[4765]: I1203 21:31:33.935026 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzk44\" (UniqueName: \"kubernetes.io/projected/711b4e95-ecdb-4d3b-9bd9-7a1473108d42-kube-api-access-kzk44\") on node \"crc\" DevicePath \"\"" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.295601 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jz5ss" event={"ID":"fabbb260-e586-47ea-99a9-d34da1d9d2b9","Type":"ContainerDied","Data":"20f820d8261156654a26cf5d922329aaf3777d5b5c878b09428e23a14fa9954e"} Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.299234 4765 scope.go:117] "RemoveContainer" containerID="c8d36e54bc750618285e16dd87e98a3ec63eb84b85768d72e428c6c03c5a0db3" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.295693 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jz5ss" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.309572 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjw7v" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.310058 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjw7v" event={"ID":"711b4e95-ecdb-4d3b-9bd9-7a1473108d42","Type":"ContainerDied","Data":"80bba44cfab03665bed0761b8bb67bff977e9dc5bc2958dd18ae88d397f9b43e"} Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.332455 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jz5ss"] Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.338603 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jz5ss"] Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.351660 4765 scope.go:117] "RemoveContainer" containerID="6b24227854b93c8211334a56d6ec672e62c53480f81d7054406d03370e04efdd" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.381337 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" path="/var/lib/kubelet/pods/fabbb260-e586-47ea-99a9-d34da1d9d2b9/volumes" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.381965 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pjw7v"] Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.381990 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pjw7v"] Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.393600 4765 scope.go:117] "RemoveContainer" containerID="0adfdccf8e956ceeaf48f57c7be62596152aa5df57bbbbb4424584b5f0075c00" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.424592 4765 scope.go:117] "RemoveContainer" 
containerID="b3f3698de39b006f5317bfacb97417c8eeb92f91ff764b708b4903ea23106f6d" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.473067 4765 scope.go:117] "RemoveContainer" containerID="ad5f380e91095a9f7b04ab9f3fb538447d74bc055f831b6ff6fba565fbae825a" Dec 03 21:31:34 crc kubenswrapper[4765]: I1203 21:31:34.491973 4765 scope.go:117] "RemoveContainer" containerID="50c13a52e0ae690881e532ab74950f1507f7b903bf9d58b795d4773be92dcb89" Dec 03 21:31:36 crc kubenswrapper[4765]: I1203 21:31:36.381851 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" path="/var/lib/kubelet/pods/711b4e95-ecdb-4d3b-9bd9-7a1473108d42/volumes" Dec 03 21:31:46 crc kubenswrapper[4765]: I1203 21:31:46.360241 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:31:46 crc kubenswrapper[4765]: E1203 21:31:46.361216 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:31:57 crc kubenswrapper[4765]: I1203 21:31:57.360132 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:31:57 crc kubenswrapper[4765]: E1203 21:31:57.361362 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" 
podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:32:09 crc kubenswrapper[4765]: I1203 21:32:09.359619 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:32:09 crc kubenswrapper[4765]: E1203 21:32:09.360446 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:32:21 crc kubenswrapper[4765]: I1203 21:32:21.360811 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:32:21 crc kubenswrapper[4765]: E1203 21:32:21.362079 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:32:32 crc kubenswrapper[4765]: I1203 21:32:32.373406 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:32:32 crc kubenswrapper[4765]: E1203 21:32:32.374530 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:32:46 crc kubenswrapper[4765]: I1203 21:32:46.361379 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:32:46 crc kubenswrapper[4765]: E1203 21:32:46.362184 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:32:58 crc kubenswrapper[4765]: I1203 21:32:58.361365 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:32:58 crc kubenswrapper[4765]: E1203 21:32:58.362364 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:33:10 crc kubenswrapper[4765]: I1203 21:33:10.361249 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:33:10 crc kubenswrapper[4765]: E1203 21:33:10.362386 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:33:17 crc kubenswrapper[4765]: I1203 21:33:17.455354 4765 generic.go:334] "Generic (PLEG): container finished" podID="a4425100-38b1-43b3-90ba-8691dcf4d4aa" containerID="63b2e5c4092441bcbfbcb875cab522208ffbdd671406fc78fa1a62e3238c1538" exitCode=0 Dec 03 21:33:17 crc kubenswrapper[4765]: I1203 21:33:17.455438 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4425100-38b1-43b3-90ba-8691dcf4d4aa","Type":"ContainerDied","Data":"63b2e5c4092441bcbfbcb875cab522208ffbdd671406fc78fa1a62e3238c1538"} Dec 03 21:33:18 crc kubenswrapper[4765]: I1203 21:33:18.934998 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.001728 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.001845 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config-secret\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.001880 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ssh-key\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " 
Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.001912 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-temporary\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.001953 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-workdir\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.001978 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-config-data\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.002081 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hg7r\" (UniqueName: \"kubernetes.io/projected/a4425100-38b1-43b3-90ba-8691dcf4d4aa-kube-api-access-8hg7r\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.002111 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.002131 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ca-certs\") pod \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\" (UID: \"a4425100-38b1-43b3-90ba-8691dcf4d4aa\") " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.002680 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.003030 4765 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.003421 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-config-data" (OuterVolumeSpecName: "config-data") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.007439 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.007578 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4425100-38b1-43b3-90ba-8691dcf4d4aa-kube-api-access-8hg7r" (OuterVolumeSpecName: "kube-api-access-8hg7r") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "kube-api-access-8hg7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.012320 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.033471 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.040617 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.057600 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.069818 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a4425100-38b1-43b3-90ba-8691dcf4d4aa" (UID: "a4425100-38b1-43b3-90ba-8691dcf4d4aa"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.104851 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105409 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105542 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105625 4765 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4425100-38b1-43b3-90ba-8691dcf4d4aa-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath 
\"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105702 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105778 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hg7r\" (UniqueName: \"kubernetes.io/projected/a4425100-38b1-43b3-90ba-8691dcf4d4aa-kube-api-access-8hg7r\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105850 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4425100-38b1-43b3-90ba-8691dcf4d4aa-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.105928 4765 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4425100-38b1-43b3-90ba-8691dcf4d4aa-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.124652 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.207569 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.478692 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4425100-38b1-43b3-90ba-8691dcf4d4aa","Type":"ContainerDied","Data":"ec03530d1de7d54c45213baf88d28a73f3166401f79e102f623def41197a9bf7"} Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.478749 4765 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ec03530d1de7d54c45213baf88d28a73f3166401f79e102f623def41197a9bf7" Dec 03 21:33:19 crc kubenswrapper[4765]: I1203 21:33:19.478800 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 21:33:24 crc kubenswrapper[4765]: I1203 21:33:24.360467 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:33:24 crc kubenswrapper[4765]: E1203 21:33:24.361490 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.282726 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284255 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="registry-server" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284288 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="registry-server" Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284386 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="extract-utilities" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284407 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="extract-utilities" Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284446 4765 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="extract-content" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284463 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="extract-content" Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284501 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="extract-content" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284517 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="extract-content" Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284546 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="registry-server" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284563 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="registry-server" Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284591 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="extract-utilities" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284607 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="extract-utilities" Dec 03 21:33:29 crc kubenswrapper[4765]: E1203 21:33:29.284663 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4425100-38b1-43b3-90ba-8691dcf4d4aa" containerName="tempest-tests-tempest-tests-runner" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.284681 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4425100-38b1-43b3-90ba-8691dcf4d4aa" containerName="tempest-tests-tempest-tests-runner" Dec 03 21:33:29 crc kubenswrapper[4765]: 
I1203 21:33:29.285086 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabbb260-e586-47ea-99a9-d34da1d9d2b9" containerName="registry-server" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.285150 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4425100-38b1-43b3-90ba-8691dcf4d4aa" containerName="tempest-tests-tempest-tests-runner" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.285184 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="711b4e95-ecdb-4d3b-9bd9-7a1473108d42" containerName="registry-server" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.286654 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.289428 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-n4r4n" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.298455 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.446650 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.447004 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxb8s\" (UniqueName: \"kubernetes.io/projected/14c03184-5d99-4a39-99ba-605dd4c44040-kube-api-access-dxb8s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.548468 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxb8s\" (UniqueName: \"kubernetes.io/projected/14c03184-5d99-4a39-99ba-605dd4c44040-kube-api-access-dxb8s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.548577 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.548936 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.573572 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxb8s\" (UniqueName: \"kubernetes.io/projected/14c03184-5d99-4a39-99ba-605dd4c44040-kube-api-access-dxb8s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.593343 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"14c03184-5d99-4a39-99ba-605dd4c44040\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:29 crc kubenswrapper[4765]: I1203 21:33:29.613308 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 21:33:30 crc kubenswrapper[4765]: I1203 21:33:30.133928 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 21:33:30 crc kubenswrapper[4765]: I1203 21:33:30.136917 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:33:30 crc kubenswrapper[4765]: I1203 21:33:30.609723 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"14c03184-5d99-4a39-99ba-605dd4c44040","Type":"ContainerStarted","Data":"744b424fc1b45372702b13ee8e9641736783906e30acb71ca16aa5dece551132"} Dec 03 21:33:31 crc kubenswrapper[4765]: I1203 21:33:31.627362 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"14c03184-5d99-4a39-99ba-605dd4c44040","Type":"ContainerStarted","Data":"b9fea3f9c08de7273cb94d292185cee158f140f06748d8b0aec239fb2aeea6ec"} Dec 03 21:33:31 crc kubenswrapper[4765]: I1203 21:33:31.659504 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.843699127 podStartE2EDuration="2.659475281s" podCreationTimestamp="2025-12-03 21:33:29 +0000 UTC" firstStartedPulling="2025-12-03 21:33:30.136700416 +0000 UTC m=+3308.067245567" lastFinishedPulling="2025-12-03 21:33:30.95247657 +0000 UTC m=+3308.883021721" observedRunningTime="2025-12-03 
21:33:31.647538745 +0000 UTC m=+3309.578083926" watchObservedRunningTime="2025-12-03 21:33:31.659475281 +0000 UTC m=+3309.590020482" Dec 03 21:33:36 crc kubenswrapper[4765]: I1203 21:33:36.360151 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:33:36 crc kubenswrapper[4765]: E1203 21:33:36.361576 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:33:47 crc kubenswrapper[4765]: I1203 21:33:47.360866 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:33:47 crc kubenswrapper[4765]: E1203 21:33:47.362104 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.251924 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wnz7b/must-gather-58cn4"] Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.254671 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.260494 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wnz7b"/"openshift-service-ca.crt" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.260672 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wnz7b"/"kube-root-ca.crt" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.260786 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wnz7b"/"default-dockercfg-rz79j" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.278330 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc309a07-23e2-493e-870b-d3fa60428deb-must-gather-output\") pod \"must-gather-58cn4\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.278592 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcwjz\" (UniqueName: \"kubernetes.io/projected/fc309a07-23e2-493e-870b-d3fa60428deb-kube-api-access-dcwjz\") pod \"must-gather-58cn4\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.285400 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wnz7b/must-gather-58cn4"] Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.380426 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcwjz\" (UniqueName: \"kubernetes.io/projected/fc309a07-23e2-493e-870b-d3fa60428deb-kube-api-access-dcwjz\") pod \"must-gather-58cn4\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " 
pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.380734 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc309a07-23e2-493e-870b-d3fa60428deb-must-gather-output\") pod \"must-gather-58cn4\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.381677 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc309a07-23e2-493e-870b-d3fa60428deb-must-gather-output\") pod \"must-gather-58cn4\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.398702 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcwjz\" (UniqueName: \"kubernetes.io/projected/fc309a07-23e2-493e-870b-d3fa60428deb-kube-api-access-dcwjz\") pod \"must-gather-58cn4\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:55 crc kubenswrapper[4765]: I1203 21:33:55.571764 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:33:56 crc kubenswrapper[4765]: I1203 21:33:56.014784 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wnz7b/must-gather-58cn4"] Dec 03 21:33:56 crc kubenswrapper[4765]: I1203 21:33:56.916065 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/must-gather-58cn4" event={"ID":"fc309a07-23e2-493e-870b-d3fa60428deb","Type":"ContainerStarted","Data":"203ddd3690b6c9a0d38b47aed087a4e55f083c1b23a14afb804e064dfb1e4f94"} Dec 03 21:34:00 crc kubenswrapper[4765]: I1203 21:34:00.959030 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/must-gather-58cn4" event={"ID":"fc309a07-23e2-493e-870b-d3fa60428deb","Type":"ContainerStarted","Data":"951abac239f4b50592ce4760c2ec273834bda22eb09eb903b8419d3c9731055d"} Dec 03 21:34:01 crc kubenswrapper[4765]: I1203 21:34:01.979567 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/must-gather-58cn4" event={"ID":"fc309a07-23e2-493e-870b-d3fa60428deb","Type":"ContainerStarted","Data":"81aafe1535c511c1a9e26e21f5f0e9bc81380da05b7421cc0c465fc4227eef65"} Dec 03 21:34:02 crc kubenswrapper[4765]: I1203 21:34:02.011251 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wnz7b/must-gather-58cn4" podStartSLOduration=2.8511213 podStartE2EDuration="7.011220075s" podCreationTimestamp="2025-12-03 21:33:55 +0000 UTC" firstStartedPulling="2025-12-03 21:33:56.015480112 +0000 UTC m=+3333.946025263" lastFinishedPulling="2025-12-03 21:34:00.175578887 +0000 UTC m=+3338.106124038" observedRunningTime="2025-12-03 21:34:02.003690795 +0000 UTC m=+3339.934235986" watchObservedRunningTime="2025-12-03 21:34:02.011220075 +0000 UTC m=+3339.941765256" Dec 03 21:34:02 crc kubenswrapper[4765]: I1203 21:34:02.366194 4765 scope.go:117] "RemoveContainer" 
containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:34:02 crc kubenswrapper[4765]: E1203 21:34:02.366586 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:34:03 crc kubenswrapper[4765]: E1203 21:34:03.099160 4765 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.65:50708->38.102.83.65:33367: read tcp 38.102.83.65:50708->38.102.83.65:33367: read: connection reset by peer Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.315822 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wnz7b/crc-debug-ppnnz"] Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.319530 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.487243 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbk6\" (UniqueName: \"kubernetes.io/projected/36061ef3-1590-4234-a220-702733a8f906-kube-api-access-9gbk6\") pod \"crc-debug-ppnnz\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.487805 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36061ef3-1590-4234-a220-702733a8f906-host\") pod \"crc-debug-ppnnz\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.589950 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbk6\" (UniqueName: \"kubernetes.io/projected/36061ef3-1590-4234-a220-702733a8f906-kube-api-access-9gbk6\") pod \"crc-debug-ppnnz\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.590063 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36061ef3-1590-4234-a220-702733a8f906-host\") pod \"crc-debug-ppnnz\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.590152 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36061ef3-1590-4234-a220-702733a8f906-host\") pod \"crc-debug-ppnnz\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc 
kubenswrapper[4765]: I1203 21:34:04.609043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gbk6\" (UniqueName: \"kubernetes.io/projected/36061ef3-1590-4234-a220-702733a8f906-kube-api-access-9gbk6\") pod \"crc-debug-ppnnz\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: I1203 21:34:04.643854 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:04 crc kubenswrapper[4765]: W1203 21:34:04.668663 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36061ef3_1590_4234_a220_702733a8f906.slice/crio-90052edbdcfbfd370c7b2ba7acf94670a7b9fa340e8f3e99072861d5ca28f768 WatchSource:0}: Error finding container 90052edbdcfbfd370c7b2ba7acf94670a7b9fa340e8f3e99072861d5ca28f768: Status 404 returned error can't find the container with id 90052edbdcfbfd370c7b2ba7acf94670a7b9fa340e8f3e99072861d5ca28f768 Dec 03 21:34:05 crc kubenswrapper[4765]: I1203 21:34:05.003820 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" event={"ID":"36061ef3-1590-4234-a220-702733a8f906","Type":"ContainerStarted","Data":"90052edbdcfbfd370c7b2ba7acf94670a7b9fa340e8f3e99072861d5ca28f768"} Dec 03 21:34:15 crc kubenswrapper[4765]: I1203 21:34:15.360877 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:34:15 crc kubenswrapper[4765]: E1203 21:34:15.361527 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:34:17 crc kubenswrapper[4765]: I1203 21:34:17.146812 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" event={"ID":"36061ef3-1590-4234-a220-702733a8f906","Type":"ContainerStarted","Data":"7cad4108802af273d75ade660defb9f3aaa9239805b7cefbac8ce04ea4e02878"} Dec 03 21:34:17 crc kubenswrapper[4765]: I1203 21:34:17.168230 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" podStartSLOduration=1.247279255 podStartE2EDuration="13.168209002s" podCreationTimestamp="2025-12-03 21:34:04 +0000 UTC" firstStartedPulling="2025-12-03 21:34:04.671036261 +0000 UTC m=+3342.601581412" lastFinishedPulling="2025-12-03 21:34:16.591966008 +0000 UTC m=+3354.522511159" observedRunningTime="2025-12-03 21:34:17.159758888 +0000 UTC m=+3355.090304049" watchObservedRunningTime="2025-12-03 21:34:17.168209002 +0000 UTC m=+3355.098754153" Dec 03 21:34:29 crc kubenswrapper[4765]: I1203 21:34:29.360092 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:34:29 crc kubenswrapper[4765]: E1203 21:34:29.360960 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:34:35 crc kubenswrapper[4765]: I1203 21:34:35.348996 4765 generic.go:334] "Generic (PLEG): container finished" podID="36061ef3-1590-4234-a220-702733a8f906" containerID="7cad4108802af273d75ade660defb9f3aaa9239805b7cefbac8ce04ea4e02878" exitCode=0 Dec 
03 21:34:35 crc kubenswrapper[4765]: I1203 21:34:35.349078 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" event={"ID":"36061ef3-1590-4234-a220-702733a8f906","Type":"ContainerDied","Data":"7cad4108802af273d75ade660defb9f3aaa9239805b7cefbac8ce04ea4e02878"} Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.491717 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.523516 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wnz7b/crc-debug-ppnnz"] Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.531153 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wnz7b/crc-debug-ppnnz"] Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.648780 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36061ef3-1590-4234-a220-702733a8f906-host\") pod \"36061ef3-1590-4234-a220-702733a8f906\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.648982 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gbk6\" (UniqueName: \"kubernetes.io/projected/36061ef3-1590-4234-a220-702733a8f906-kube-api-access-9gbk6\") pod \"36061ef3-1590-4234-a220-702733a8f906\" (UID: \"36061ef3-1590-4234-a220-702733a8f906\") " Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.649827 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36061ef3-1590-4234-a220-702733a8f906-host" (OuterVolumeSpecName: "host") pod "36061ef3-1590-4234-a220-702733a8f906" (UID: "36061ef3-1590-4234-a220-702733a8f906"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.655038 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36061ef3-1590-4234-a220-702733a8f906-kube-api-access-9gbk6" (OuterVolumeSpecName: "kube-api-access-9gbk6") pod "36061ef3-1590-4234-a220-702733a8f906" (UID: "36061ef3-1590-4234-a220-702733a8f906"). InnerVolumeSpecName "kube-api-access-9gbk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.751365 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36061ef3-1590-4234-a220-702733a8f906-host\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:36 crc kubenswrapper[4765]: I1203 21:34:36.751410 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gbk6\" (UniqueName: \"kubernetes.io/projected/36061ef3-1590-4234-a220-702733a8f906-kube-api-access-9gbk6\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.367321 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90052edbdcfbfd370c7b2ba7acf94670a7b9fa340e8f3e99072861d5ca28f768" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.367374 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ppnnz" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.776568 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wnz7b/crc-debug-ntpcw"] Dec 03 21:34:37 crc kubenswrapper[4765]: E1203 21:34:37.777006 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36061ef3-1590-4234-a220-702733a8f906" containerName="container-00" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.777020 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="36061ef3-1590-4234-a220-702733a8f906" containerName="container-00" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.777235 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="36061ef3-1590-4234-a220-702733a8f906" containerName="container-00" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.777967 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.871256 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-host\") pod \"crc-debug-ntpcw\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.871499 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zts\" (UniqueName: \"kubernetes.io/projected/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-kube-api-access-l7zts\") pod \"crc-debug-ntpcw\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.973854 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-host\") pod \"crc-debug-ntpcw\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.973977 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zts\" (UniqueName: \"kubernetes.io/projected/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-kube-api-access-l7zts\") pod \"crc-debug-ntpcw\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:37 crc kubenswrapper[4765]: I1203 21:34:37.974076 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-host\") pod \"crc-debug-ntpcw\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:38 crc kubenswrapper[4765]: I1203 21:34:38.001202 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7zts\" (UniqueName: \"kubernetes.io/projected/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-kube-api-access-l7zts\") pod \"crc-debug-ntpcw\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:38 crc kubenswrapper[4765]: I1203 21:34:38.103813 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:38 crc kubenswrapper[4765]: I1203 21:34:38.372375 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36061ef3-1590-4234-a220-702733a8f906" path="/var/lib/kubelet/pods/36061ef3-1590-4234-a220-702733a8f906/volumes" Dec 03 21:34:38 crc kubenswrapper[4765]: I1203 21:34:38.381470 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" event={"ID":"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef","Type":"ContainerStarted","Data":"0ab1db76ecc027b4cfb5c6f79d59e1faa3a3224345ae205594a799986fcfa857"} Dec 03 21:34:39 crc kubenswrapper[4765]: I1203 21:34:39.390852 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" containerID="6fb678504b930d6e8856f7e69d2836609f0ba23f7a8680a8454af6b7a7fe9842" exitCode=1 Dec 03 21:34:39 crc kubenswrapper[4765]: I1203 21:34:39.390947 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" event={"ID":"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef","Type":"ContainerDied","Data":"6fb678504b930d6e8856f7e69d2836609f0ba23f7a8680a8454af6b7a7fe9842"} Dec 03 21:34:39 crc kubenswrapper[4765]: I1203 21:34:39.427519 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wnz7b/crc-debug-ntpcw"] Dec 03 21:34:39 crc kubenswrapper[4765]: I1203 21:34:39.436803 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wnz7b/crc-debug-ntpcw"] Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.512173 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.633995 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-host\") pod \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.634089 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-host" (OuterVolumeSpecName: "host") pod "cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" (UID: "cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.634220 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7zts\" (UniqueName: \"kubernetes.io/projected/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-kube-api-access-l7zts\") pod \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\" (UID: \"cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef\") " Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.634774 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-host\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.671591 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-kube-api-access-l7zts" (OuterVolumeSpecName: "kube-api-access-l7zts") pod "cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" (UID: "cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef"). InnerVolumeSpecName "kube-api-access-l7zts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:34:40 crc kubenswrapper[4765]: I1203 21:34:40.736124 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7zts\" (UniqueName: \"kubernetes.io/projected/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef-kube-api-access-l7zts\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:41 crc kubenswrapper[4765]: I1203 21:34:41.408601 4765 scope.go:117] "RemoveContainer" containerID="6fb678504b930d6e8856f7e69d2836609f0ba23f7a8680a8454af6b7a7fe9842" Dec 03 21:34:41 crc kubenswrapper[4765]: I1203 21:34:41.408667 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wnz7b/crc-debug-ntpcw" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.365595 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:34:42 crc kubenswrapper[4765]: E1203 21:34:42.365864 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.379673 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" path="/var/lib/kubelet/pods/cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef/volumes" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.380380 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xxvmq"] Dec 03 21:34:42 crc kubenswrapper[4765]: E1203 21:34:42.380783 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" 
containerName="container-00" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.380804 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" containerName="container-00" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.381064 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3371ad-f9f1-4af8-91dd-c2dd3ce254ef" containerName="container-00" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.383077 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.383589 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxvmq"] Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.471065 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-catalog-content\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.471137 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-utilities\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.471189 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8xx\" (UniqueName: \"kubernetes.io/projected/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-kube-api-access-sc8xx\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " 
pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.573041 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-catalog-content\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.573103 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-utilities\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.573153 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc8xx\" (UniqueName: \"kubernetes.io/projected/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-kube-api-access-sc8xx\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.573574 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-catalog-content\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.573911 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-utilities\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " 
pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.602229 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc8xx\" (UniqueName: \"kubernetes.io/projected/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-kube-api-access-sc8xx\") pod \"community-operators-xxvmq\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:42 crc kubenswrapper[4765]: I1203 21:34:42.703947 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:43 crc kubenswrapper[4765]: I1203 21:34:43.275729 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxvmq"] Dec 03 21:34:43 crc kubenswrapper[4765]: I1203 21:34:43.433771 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerStarted","Data":"9044d4612fc633289d8581ac70755a817b12598652a31a03aabeba3c539b1723"} Dec 03 21:34:43 crc kubenswrapper[4765]: I1203 21:34:43.959206 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5g4x"] Dec 03 21:34:43 crc kubenswrapper[4765]: I1203 21:34:43.961049 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:43 crc kubenswrapper[4765]: I1203 21:34:43.973865 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5g4x"] Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.123918 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzz5\" (UniqueName: \"kubernetes.io/projected/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-kube-api-access-zrzz5\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.124001 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-utilities\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.124060 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-catalog-content\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.226179 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzz5\" (UniqueName: \"kubernetes.io/projected/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-kube-api-access-zrzz5\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.226249 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-utilities\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.226371 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-catalog-content\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.226908 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-catalog-content\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.226910 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-utilities\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.248017 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzz5\" (UniqueName: \"kubernetes.io/projected/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-kube-api-access-zrzz5\") pod \"redhat-marketplace-t5g4x\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.331498 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.455279 4765 generic.go:334] "Generic (PLEG): container finished" podID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerID="85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44" exitCode=0 Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.455536 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerDied","Data":"85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44"} Dec 03 21:34:44 crc kubenswrapper[4765]: W1203 21:34:44.851401 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8981db3a_0b5e_40d4_bd4c_ec3cc725f428.slice/crio-bdf9d6d4769c4d295a51f17c5d3351cc5d8ab8afd8a3fc60d52278c7b8cd3521 WatchSource:0}: Error finding container bdf9d6d4769c4d295a51f17c5d3351cc5d8ab8afd8a3fc60d52278c7b8cd3521: Status 404 returned error can't find the container with id bdf9d6d4769c4d295a51f17c5d3351cc5d8ab8afd8a3fc60d52278c7b8cd3521 Dec 03 21:34:44 crc kubenswrapper[4765]: I1203 21:34:44.854105 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5g4x"] Dec 03 21:34:45 crc kubenswrapper[4765]: I1203 21:34:45.484954 4765 generic.go:334] "Generic (PLEG): container finished" podID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerID="6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20" exitCode=0 Dec 03 21:34:45 crc kubenswrapper[4765]: I1203 21:34:45.485357 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5g4x" event={"ID":"8981db3a-0b5e-40d4-bd4c-ec3cc725f428","Type":"ContainerDied","Data":"6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20"} Dec 03 21:34:45 crc kubenswrapper[4765]: I1203 
21:34:45.485418 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5g4x" event={"ID":"8981db3a-0b5e-40d4-bd4c-ec3cc725f428","Type":"ContainerStarted","Data":"bdf9d6d4769c4d295a51f17c5d3351cc5d8ab8afd8a3fc60d52278c7b8cd3521"} Dec 03 21:34:46 crc kubenswrapper[4765]: I1203 21:34:46.497514 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerStarted","Data":"32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d"} Dec 03 21:34:47 crc kubenswrapper[4765]: I1203 21:34:47.508199 4765 generic.go:334] "Generic (PLEG): container finished" podID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerID="e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06" exitCode=0 Dec 03 21:34:47 crc kubenswrapper[4765]: I1203 21:34:47.508277 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5g4x" event={"ID":"8981db3a-0b5e-40d4-bd4c-ec3cc725f428","Type":"ContainerDied","Data":"e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06"} Dec 03 21:34:47 crc kubenswrapper[4765]: I1203 21:34:47.511225 4765 generic.go:334] "Generic (PLEG): container finished" podID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerID="32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d" exitCode=0 Dec 03 21:34:47 crc kubenswrapper[4765]: I1203 21:34:47.511260 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerDied","Data":"32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d"} Dec 03 21:34:48 crc kubenswrapper[4765]: I1203 21:34:48.523129 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5g4x" 
event={"ID":"8981db3a-0b5e-40d4-bd4c-ec3cc725f428","Type":"ContainerStarted","Data":"2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0"} Dec 03 21:34:48 crc kubenswrapper[4765]: I1203 21:34:48.525157 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerStarted","Data":"7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05"} Dec 03 21:34:48 crc kubenswrapper[4765]: I1203 21:34:48.571418 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xxvmq" podStartSLOduration=3.115118199 podStartE2EDuration="6.571400384s" podCreationTimestamp="2025-12-03 21:34:42 +0000 UTC" firstStartedPulling="2025-12-03 21:34:44.457348547 +0000 UTC m=+3382.387893698" lastFinishedPulling="2025-12-03 21:34:47.913630712 +0000 UTC m=+3385.844175883" observedRunningTime="2025-12-03 21:34:48.57012125 +0000 UTC m=+3386.500666401" watchObservedRunningTime="2025-12-03 21:34:48.571400384 +0000 UTC m=+3386.501945535" Dec 03 21:34:48 crc kubenswrapper[4765]: I1203 21:34:48.575110 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5g4x" podStartSLOduration=3.1452185249999998 podStartE2EDuration="5.575102974s" podCreationTimestamp="2025-12-03 21:34:43 +0000 UTC" firstStartedPulling="2025-12-03 21:34:45.488403158 +0000 UTC m=+3383.418948309" lastFinishedPulling="2025-12-03 21:34:47.918287447 +0000 UTC m=+3385.848832758" observedRunningTime="2025-12-03 21:34:48.548046125 +0000 UTC m=+3386.478591266" watchObservedRunningTime="2025-12-03 21:34:48.575102974 +0000 UTC m=+3386.505648125" Dec 03 21:34:52 crc kubenswrapper[4765]: I1203 21:34:52.704734 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:52 crc kubenswrapper[4765]: I1203 21:34:52.705349 
4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:52 crc kubenswrapper[4765]: I1203 21:34:52.753011 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:53 crc kubenswrapper[4765]: I1203 21:34:53.650672 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:54 crc kubenswrapper[4765]: I1203 21:34:54.332017 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:54 crc kubenswrapper[4765]: I1203 21:34:54.332064 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:54 crc kubenswrapper[4765]: I1203 21:34:54.390043 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:54 crc kubenswrapper[4765]: I1203 21:34:54.656098 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:56 crc kubenswrapper[4765]: I1203 21:34:56.161216 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5g4x"] Dec 03 21:34:56 crc kubenswrapper[4765]: I1203 21:34:56.564832 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxvmq"] Dec 03 21:34:56 crc kubenswrapper[4765]: I1203 21:34:56.565186 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xxvmq" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="registry-server" containerID="cri-o://7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05" gracePeriod=2 Dec 03 
21:34:56 crc kubenswrapper[4765]: I1203 21:34:56.623726 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5g4x" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="registry-server" containerID="cri-o://2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0" gracePeriod=2 Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.159483 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.166246 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.242465 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-catalog-content\") pod \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.242642 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc8xx\" (UniqueName: \"kubernetes.io/projected/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-kube-api-access-sc8xx\") pod \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.242719 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-utilities\") pod \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.242786 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrzz5\" 
(UniqueName: \"kubernetes.io/projected/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-kube-api-access-zrzz5\") pod \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\" (UID: \"8981db3a-0b5e-40d4-bd4c-ec3cc725f428\") " Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.242859 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-catalog-content\") pod \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.242970 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-utilities\") pod \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\" (UID: \"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369\") " Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.244617 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-utilities" (OuterVolumeSpecName: "utilities") pod "21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" (UID: "21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.248030 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-utilities" (OuterVolumeSpecName: "utilities") pod "8981db3a-0b5e-40d4-bd4c-ec3cc725f428" (UID: "8981db3a-0b5e-40d4-bd4c-ec3cc725f428"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.253826 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-kube-api-access-zrzz5" (OuterVolumeSpecName: "kube-api-access-zrzz5") pod "8981db3a-0b5e-40d4-bd4c-ec3cc725f428" (UID: "8981db3a-0b5e-40d4-bd4c-ec3cc725f428"). InnerVolumeSpecName "kube-api-access-zrzz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.254008 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-kube-api-access-sc8xx" (OuterVolumeSpecName: "kube-api-access-sc8xx") pod "21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" (UID: "21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369"). InnerVolumeSpecName "kube-api-access-sc8xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.265119 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8981db3a-0b5e-40d4-bd4c-ec3cc725f428" (UID: "8981db3a-0b5e-40d4-bd4c-ec3cc725f428"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.311639 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" (UID: "21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.344548 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc8xx\" (UniqueName: \"kubernetes.io/projected/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-kube-api-access-sc8xx\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.344582 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.344591 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrzz5\" (UniqueName: \"kubernetes.io/projected/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-kube-api-access-zrzz5\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.344601 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.344609 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.344617 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8981db3a-0b5e-40d4-bd4c-ec3cc725f428-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.359912 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.360346 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.636808 4765 generic.go:334] "Generic (PLEG): container finished" podID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerID="2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0" exitCode=0 Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.636887 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5g4x" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.636907 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5g4x" event={"ID":"8981db3a-0b5e-40d4-bd4c-ec3cc725f428","Type":"ContainerDied","Data":"2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0"} Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.637391 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5g4x" event={"ID":"8981db3a-0b5e-40d4-bd4c-ec3cc725f428","Type":"ContainerDied","Data":"bdf9d6d4769c4d295a51f17c5d3351cc5d8ab8afd8a3fc60d52278c7b8cd3521"} Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.637435 4765 scope.go:117] "RemoveContainer" containerID="2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.640398 4765 generic.go:334] "Generic (PLEG): container finished" podID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerID="7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05" exitCode=0 Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.640444 4765 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerDied","Data":"7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05"} Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.640472 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxvmq" event={"ID":"21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369","Type":"ContainerDied","Data":"9044d4612fc633289d8581ac70755a817b12598652a31a03aabeba3c539b1723"} Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.640506 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxvmq" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.655695 4765 scope.go:117] "RemoveContainer" containerID="e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.680518 4765 scope.go:117] "RemoveContainer" containerID="6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.687247 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5g4x"] Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.698035 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5g4x"] Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.704515 4765 scope.go:117] "RemoveContainer" containerID="2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0" Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.708002 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0\": container with ID starting with 2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0 not found: ID does not exist" 
containerID="2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.708056 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0"} err="failed to get container status \"2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0\": rpc error: code = NotFound desc = could not find container \"2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0\": container with ID starting with 2ebf7d24174fa18f40945dc44aef2ddff381e3c6c9594d52763dba3d022d89d0 not found: ID does not exist" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.708089 4765 scope.go:117] "RemoveContainer" containerID="e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.708456 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxvmq"] Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.708765 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06\": container with ID starting with e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06 not found: ID does not exist" containerID="e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.708793 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06"} err="failed to get container status \"e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06\": rpc error: code = NotFound desc = could not find container \"e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06\": container with ID starting with 
e4dd4d9b52fecdccb913236bbc9930523f9a538f944a245029a005ad157f8f06 not found: ID does not exist" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.708818 4765 scope.go:117] "RemoveContainer" containerID="6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20" Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.709200 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20\": container with ID starting with 6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20 not found: ID does not exist" containerID="6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.709223 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20"} err="failed to get container status \"6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20\": rpc error: code = NotFound desc = could not find container \"6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20\": container with ID starting with 6f12b02c0c8e2b28ee56115e003cc0a9ce763ec24a0a9cef6d4e003d1cfa5d20 not found: ID does not exist" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.709236 4765 scope.go:117] "RemoveContainer" containerID="7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.718996 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xxvmq"] Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.731094 4765 scope.go:117] "RemoveContainer" containerID="32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.780523 4765 scope.go:117] "RemoveContainer" 
containerID="85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.820491 4765 scope.go:117] "RemoveContainer" containerID="7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05" Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.820941 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05\": container with ID starting with 7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05 not found: ID does not exist" containerID="7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.820987 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05"} err="failed to get container status \"7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05\": rpc error: code = NotFound desc = could not find container \"7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05\": container with ID starting with 7adb7d5da2d7b682b4f207173d434be0b0eaadd55ed95b7b798f3985d3ef4e05 not found: ID does not exist" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.821019 4765 scope.go:117] "RemoveContainer" containerID="32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d" Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.821508 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d\": container with ID starting with 32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d not found: ID does not exist" containerID="32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d" Dec 03 21:34:57 crc 
kubenswrapper[4765]: I1203 21:34:57.821552 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d"} err="failed to get container status \"32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d\": rpc error: code = NotFound desc = could not find container \"32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d\": container with ID starting with 32c249131b3e489fdd101cab83a7302c3c103b179d7bea9bb7f181512321414d not found: ID does not exist" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.821578 4765 scope.go:117] "RemoveContainer" containerID="85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44" Dec 03 21:34:57 crc kubenswrapper[4765]: E1203 21:34:57.821934 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44\": container with ID starting with 85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44 not found: ID does not exist" containerID="85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44" Dec 03 21:34:57 crc kubenswrapper[4765]: I1203 21:34:57.821965 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44"} err="failed to get container status \"85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44\": rpc error: code = NotFound desc = could not find container \"85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44\": container with ID starting with 85c930f0eaf13463dab53844afb13dd70b2ccaf7d99b6b7feb6a3582d8e81b44 not found: ID does not exist" Dec 03 21:34:58 crc kubenswrapper[4765]: I1203 21:34:58.388691 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" 
path="/var/lib/kubelet/pods/21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369/volumes" Dec 03 21:34:58 crc kubenswrapper[4765]: I1203 21:34:58.389632 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" path="/var/lib/kubelet/pods/8981db3a-0b5e-40d4-bd4c-ec3cc725f428/volumes" Dec 03 21:35:08 crc kubenswrapper[4765]: I1203 21:35:08.359794 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:35:08 crc kubenswrapper[4765]: E1203 21:35:08.360731 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:35:21 crc kubenswrapper[4765]: I1203 21:35:21.359960 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:35:21 crc kubenswrapper[4765]: E1203 21:35:21.361342 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:35:36 crc kubenswrapper[4765]: I1203 21:35:36.360565 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:35:37 crc kubenswrapper[4765]: I1203 21:35:37.109944 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"240deb353c7074d110d2634b5f89a9616ad676fdf257d58d9d9803b7423e1b1d"} Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.343733 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7455d9cf5d-nflkk_f0226955-fe8e-4128-8c2e-66d0a79ee3ad/barbican-api/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.483653 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7455d9cf5d-nflkk_f0226955-fe8e-4128-8c2e-66d0a79ee3ad/barbican-api-log/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.543962 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65ffb6446-hdn74_a90044b4-b1fd-4c11-bb40-b52bf1a912f8/barbican-keystone-listener/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.595638 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65ffb6446-hdn74_a90044b4-b1fd-4c11-bb40-b52bf1a912f8/barbican-keystone-listener-log/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.726210 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fbbc67fdf-scz6t_e0aad3bb-6dd6-4673-b738-2f04849106ce/barbican-worker/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.805167 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fbbc67fdf-scz6t_e0aad3bb-6dd6-4673-b738-2f04849106ce/barbican-worker-log/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.958552 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx_e3b2c2f7-5ef3-47e1-bb0e-3298074acb32/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:51 crc kubenswrapper[4765]: I1203 21:35:51.993944 4765 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/ceilometer-central-agent/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.050879 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/ceilometer-notification-agent/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.179948 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/sg-core/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.202869 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/proxy-httpd/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.234282 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx_d5b77ee4-d4b7-48a2-993b-c7e911e88b0d/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.410219 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4_f56cb10b-3bc1-42b6-90e6-8d1802c20167/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.562577 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b340b625-0c86-49d6-8e7f-2bbfa3ab71d7/cinder-api/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.638017 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b340b625-0c86-49d6-8e7f-2bbfa3ab71d7/cinder-api-log/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.755629 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a6827558-2402-4d4f-b230-eb41101a3c41/probe/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: 
I1203 21:35:52.903667 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd57395a-abd0-4768-b1e9-0cdf5a9930d3/cinder-scheduler/0.log" Dec 03 21:35:52 crc kubenswrapper[4765]: I1203 21:35:52.992512 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a6827558-2402-4d4f-b230-eb41101a3c41/cinder-backup/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.022051 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd57395a-abd0-4768-b1e9-0cdf5a9930d3/probe/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.226785 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_090dfe86-44b6-4444-9075-abfc758bc2e4/cinder-volume/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.228145 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_090dfe86-44b6-4444-9075-abfc758bc2e4/probe/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.373839 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7_c766674f-ed9a-4a8c-8c83-a94542469c60/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.414499 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-45dqw_0acda383-efb7-45c7-8ead-19f3bb2bac36/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.563398 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hv49c_bcfd57de-3b61-4a34-a4a3-c7808baedc2d/init/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.754587 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hv49c_bcfd57de-3b61-4a34-a4a3-c7808baedc2d/init/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.790171 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hv49c_bcfd57de-3b61-4a34-a4a3-c7808baedc2d/dnsmasq-dns/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.804283 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43154ec4-ba15-4d12-afeb-a3528c1269c8/glance-httpd/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.928202 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43154ec4-ba15-4d12-afeb-a3528c1269c8/glance-log/0.log" Dec 03 21:35:53 crc kubenswrapper[4765]: I1203 21:35:53.980592 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d2d1fba0-111f-49ed-9992-e75c8f53d277/glance-httpd/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.015050 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d2d1fba0-111f-49ed-9992-e75c8f53d277/glance-log/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.281462 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-754897654-c5z9l_742566d1-3d02-42ea-8db1-e482ff699ada/horizon-log/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.302057 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-754897654-c5z9l_742566d1-3d02-42ea-8db1-e482ff699ada/horizon/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.412061 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r_51f7c3b1-f566-4371-ad1d-487bbfa1be12/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 
21:35:54.474681 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p2brn_b6cce00b-a9f8-4d5e-abbf-0e72ce498b52/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.727584 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-57c6f94f6-xmzln_4e9b168b-07ea-4870-ba96-9680c4530133/keystone-api/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.738225 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29413261-sllgz_d52a513d-e85f-4c95-9188-8748e9f08c2b/keystone-cron/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.904498 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b/kube-state-metrics/0.log" Dec 03 21:35:54 crc kubenswrapper[4765]: I1203 21:35:54.978467 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw_52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.151146 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_fdd97dcb-bc57-4867-a85d-be547f7b716f/manila-api-log/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.202678 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_fdd97dcb-bc57-4867-a85d-be547f7b716f/manila-api/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.216032 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-create-vmq9m_7dd082e4-2821-403e-a3c7-d25b6b09d645/mariadb-database-create/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.343762 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-db-sync-wvfd8_53a10b13-ab07-4448-bbaa-f2077c07c07d/manila-db-sync/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.412744 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-f585-account-create-update-dnq4r_736bb387-1ef7-4b32-9421-c6c8133d3e3c/mariadb-account-create-update/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.635130 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8dd498cd-6ec2-4d8f-ad18-72aae897e33e/manila-scheduler/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.643137 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7d62facf-5ee9-45cf-a031-15834157a662/manila-share/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.651810 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8dd498cd-6ec2-4d8f-ad18-72aae897e33e/probe/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.701563 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7d62facf-5ee9-45cf-a031-15834157a662/probe/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.956500 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8458f9f649-c6lrl_1f78f95a-adb3-4939-a8f0-3fdd4d3757da/neutron-api/0.log" Dec 03 21:35:55 crc kubenswrapper[4765]: I1203 21:35:55.956913 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8458f9f649-c6lrl_1f78f95a-adb3-4939-a8f0-3fdd4d3757da/neutron-httpd/0.log" Dec 03 21:35:56 crc kubenswrapper[4765]: I1203 21:35:56.160726 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7_360b19b3-c391-467e-ab4c-f7cb150873ea/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:56 crc kubenswrapper[4765]: I1203 
21:35:56.545928 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3f90377-318a-4a36-a187-62434c1fb8c3/nova-api-log/0.log" Dec 03 21:35:56 crc kubenswrapper[4765]: I1203 21:35:56.594633 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fae73c00-5619-4157-9bf2-4996314616aa/nova-cell0-conductor-conductor/0.log" Dec 03 21:35:56 crc kubenswrapper[4765]: I1203 21:35:56.647504 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3f90377-318a-4a36-a187-62434c1fb8c3/nova-api-api/0.log" Dec 03 21:35:56 crc kubenswrapper[4765]: I1203 21:35:56.843277 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7c9a4172-479b-4188-9264-208492b2be91/nova-cell1-conductor-conductor/0.log" Dec 03 21:35:56 crc kubenswrapper[4765]: I1203 21:35:56.862043 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c40d979f-5978-45a1-9b88-b4587eb142c2/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.096076 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf_d13320a0-48f4-4813-9692-9554f411d998/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.149272 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c11f148f-d7db-4776-a326-cb655caf8b19/nova-metadata-log/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.360679 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6963989c-bc38-471a-a22a-c7e90de20bf9/nova-scheduler-scheduler/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.532077 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f/mysql-bootstrap/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.667104 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f/galera/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.685621 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f/mysql-bootstrap/0.log" Dec 03 21:35:57 crc kubenswrapper[4765]: I1203 21:35:57.865731 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d3f1a32-afd2-49fc-b9cd-b49f14770ab2/mysql-bootstrap/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.029778 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d3f1a32-afd2-49fc-b9cd-b49f14770ab2/mysql-bootstrap/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.067794 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d3f1a32-afd2-49fc-b9cd-b49f14770ab2/galera/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.195392 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c11f148f-d7db-4776-a326-cb655caf8b19/nova-metadata-metadata/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.202101 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_bd00d94a-54ce-420e-959d-4b10ecce11d0/openstackclient/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.336501 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-f85pk_2a9aeba1-759a-41ad-a871-5cfa33de5aae/ovn-controller/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.683290 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-4b7vx_70312ced-15b1-4366-aa36-c32538b61141/openstack-network-exporter/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.719788 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovsdb-server-init/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.851109 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovsdb-server-init/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.865226 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovsdb-server/0.log" Dec 03 21:35:58 crc kubenswrapper[4765]: I1203 21:35:58.885151 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovs-vswitchd/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.111411 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_403709eb-a3d4-4e89-ac92-de401056e3d0/openstack-network-exporter/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.121519 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-nbhcq_acf5a824-dd5c-412f-a7b2-848352ec8eaa/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.173274 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_403709eb-a3d4-4e89-ac92-de401056e3d0/ovn-northd/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.315911 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ae88784b-a398-447a-aaba-b2c2e1c7dc48/openstack-network-exporter/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.397519 
4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ae88784b-a398-447a-aaba-b2c2e1c7dc48/ovsdbserver-nb/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.518727 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba74cb76-f80f-4396-9ddb-1eeec6c21fd6/openstack-network-exporter/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.543039 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba74cb76-f80f-4396-9ddb-1eeec6c21fd6/ovsdbserver-sb/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.698713 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c98cc54bd-jknm9_e714435f-b27b-485e-82cb-4cd1f1491cac/placement-api/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.796071 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0d9b22a-4baf-4947-bbba-e158c4e554e5/setup-container/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.847598 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c98cc54bd-jknm9_e714435f-b27b-485e-82cb-4cd1f1491cac/placement-log/0.log" Dec 03 21:35:59 crc kubenswrapper[4765]: I1203 21:35:59.984908 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0d9b22a-4baf-4947-bbba-e158c4e554e5/setup-container/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.001243 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0d9b22a-4baf-4947-bbba-e158c4e554e5/rabbitmq/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.104977 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be5953f-7d37-4d82-8ea7-3cff10d763c1/setup-container/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.335180 4765 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be5953f-7d37-4d82-8ea7-3cff10d763c1/rabbitmq/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.366218 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be5953f-7d37-4d82-8ea7-3cff10d763c1/setup-container/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.399478 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk_405fb54f-da87-4598-8f88-b9cb64799a12/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.579382 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5_47b92082-05ae-430d-bdfd-836be92480a8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.674102 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-hg7nd_743d2875-36d7-427b-af2e-c8a8e8d5a81c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.845592 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sxk7z_d1f5d4f1-df58-457b-b56a-64c6cda175a4/ssh-known-hosts-edpm-deployment/0.log" Dec 03 21:36:00 crc kubenswrapper[4765]: I1203 21:36:00.900460 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a4425100-38b1-43b3-90ba-8691dcf4d4aa/tempest-tests-tempest-tests-runner/0.log" Dec 03 21:36:01 crc kubenswrapper[4765]: I1203 21:36:01.010112 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_14c03184-5d99-4a39-99ba-605dd4c44040/test-operator-logs-container/0.log" Dec 03 21:36:01 crc kubenswrapper[4765]: I1203 21:36:01.158934 4765 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t_db43dd3d-a5b5-4cc3-bfbd-18689908b450/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:36:21 crc kubenswrapper[4765]: I1203 21:36:21.050710 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ffa82a93-b10c-4414-be93-7d003c7917e9/memcached/0.log" Dec 03 21:36:28 crc kubenswrapper[4765]: I1203 21:36:28.344669 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ww6rq_d17f6ecc-799c-415b-98e2-67f859a96a1a/kube-rbac-proxy/0.log" Dec 03 21:36:28 crc kubenswrapper[4765]: I1203 21:36:28.423223 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ww6rq_d17f6ecc-799c-415b-98e2-67f859a96a1a/manager/0.log" Dec 03 21:36:28 crc kubenswrapper[4765]: I1203 21:36:28.563523 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-mvdp4_e7dd69d2-65b2-4677-b6ac-e90fd4c695c1/kube-rbac-proxy/0.log" Dec 03 21:36:28 crc kubenswrapper[4765]: I1203 21:36:28.634559 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-mvdp4_e7dd69d2-65b2-4677-b6ac-e90fd4c695c1/manager/0.log" Dec 03 21:36:28 crc kubenswrapper[4765]: I1203 21:36:28.722237 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/util/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.024163 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/pull/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 
21:36:29.032319 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/util/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.060655 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/pull/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.229603 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/util/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.231367 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/pull/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.298321 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/extract/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.438898 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-czxt5_50b1a98b-3f25-4b3f-9f55-fa99f3911561/manager/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.445868 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-czxt5_50b1a98b-3f25-4b3f-9f55-fa99f3911561/kube-rbac-proxy/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.514553 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6d7f88c74f-76fch_84cb39fe-086b-4822-b54f-a5af68d2203c/kube-rbac-proxy/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.687402 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-m9fpm_48ba0b62-8ac2-4059-ac6a-8643ee1ad149/kube-rbac-proxy/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.709800 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6d7f88c74f-76fch_84cb39fe-086b-4822-b54f-a5af68d2203c/manager/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.767611 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-m9fpm_48ba0b62-8ac2-4059-ac6a-8643ee1ad149/manager/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.856193 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-9cdp5_797a4394-d04a-491b-8008-819165536dc0/kube-rbac-proxy/0.log" Dec 03 21:36:29 crc kubenswrapper[4765]: I1203 21:36:29.895038 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-9cdp5_797a4394-d04a-491b-8008-819165536dc0/manager/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.032990 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7fw8v_6ba1b815-d381-4999-9d4d-9b9b595f6d06/kube-rbac-proxy/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.165695 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-vvxrw_4527f93e-9514-4750-9f1a-45d2fc649ef2/kube-rbac-proxy/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 
21:36:30.219725 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7fw8v_6ba1b815-d381-4999-9d4d-9b9b595f6d06/manager/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.237566 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-vvxrw_4527f93e-9514-4750-9f1a-45d2fc649ef2/manager/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.336833 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-z6lzn_a3cc780d-abf0-4a2b-99c3-67f9602a782f/kube-rbac-proxy/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.439857 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-z6lzn_a3cc780d-abf0-4a2b-99c3-67f9602a782f/manager/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.502316 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-tjlbs_65cf60b9-98a5-4fe7-8675-28aadb893c7c/kube-rbac-proxy/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.578945 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-tjlbs_65cf60b9-98a5-4fe7-8675-28aadb893c7c/manager/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.607129 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-442kz_8d1cf8df-8469-41f4-a801-040210dfbb9f/kube-rbac-proxy/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.701013 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-442kz_8d1cf8df-8469-41f4-a801-040210dfbb9f/manager/0.log" Dec 03 
21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.810396 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-f4g9d_df89edd4-fc6d-4b27-8947-fbe909852d74/kube-rbac-proxy/0.log" Dec 03 21:36:30 crc kubenswrapper[4765]: I1203 21:36:30.913118 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-f4g9d_df89edd4-fc6d-4b27-8947-fbe909852d74/manager/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.011531 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-x2qpv_5f6f097a-e817-4f45-91fd-3c2d9d6b8d52/kube-rbac-proxy/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.101278 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-x2qpv_5f6f097a-e817-4f45-91fd-3c2d9d6b8d52/manager/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.175961 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-bbb8g_bbbe5e38-0e74-426e-9ada-b2d8be5f8444/kube-rbac-proxy/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.234962 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-bbb8g_bbbe5e38-0e74-426e-9ada-b2d8be5f8444/manager/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.349314 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96_5e62f5de-bd17-4c8d-bc3f-0ce237d6e266/kube-rbac-proxy/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.391760 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96_5e62f5de-bd17-4c8d-bc3f-0ce237d6e266/manager/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.744376 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54ccb7f4-f26lq_59d4b087-73be-498b-b8f7-d6b067002ad5/operator/0.log" Dec 03 21:36:31 crc kubenswrapper[4765]: I1203 21:36:31.877572 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qn2lg_86270547-80b6-44d5-971f-c260b5b7a106/registry-server/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.063396 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-n9556_5a7474c6-a9ec-40ba-8d04-49166a15bab5/kube-rbac-proxy/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.181491 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-n9556_5a7474c6-a9ec-40ba-8d04-49166a15bab5/manager/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.279545 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-dthq2_f1d3e370-5bea-4bc9-9269-7483387b6e31/kube-rbac-proxy/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.437312 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-dthq2_f1d3e370-5bea-4bc9-9269-7483387b6e31/manager/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.566254 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-b8kv2_47ff88bb-97bc-4d0b-a24b-64559741aa30/operator/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.662037 4765 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wmrgj_f0dd713c-31a7-4816-9044-bf59d8931367/kube-rbac-proxy/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.664829 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-547c884594-d98p4_19b04cd5-57c6-4494-a08b-f425c37bf13a/manager/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.685256 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wmrgj_f0dd713c-31a7-4816-9044-bf59d8931367/manager/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.761515 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-w955s_64675126-66c0-4cac-ad4e-764c10e0c344/kube-rbac-proxy/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.889517 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h7pk2_629580d2-72ea-481f-b78e-e5b6631dfda4/kube-rbac-proxy/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.895026 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-w955s_64675126-66c0-4cac-ad4e-764c10e0c344/manager/0.log" Dec 03 21:36:32 crc kubenswrapper[4765]: I1203 21:36:32.974252 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h7pk2_629580d2-72ea-481f-b78e-e5b6631dfda4/manager/0.log" Dec 03 21:36:33 crc kubenswrapper[4765]: I1203 21:36:33.068189 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-f5s59_016c4fd7-25b8-42b0-ba5d-1008cd28b8b3/kube-rbac-proxy/0.log" Dec 03 21:36:33 crc 
kubenswrapper[4765]: I1203 21:36:33.085720 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-f5s59_016c4fd7-25b8-42b0-ba5d-1008cd28b8b3/manager/0.log" Dec 03 21:36:53 crc kubenswrapper[4765]: I1203 21:36:53.955666 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qv6v8_bd1c3235-df64-48e7-9c08-e7ee70c8fe49/control-plane-machine-set-operator/0.log" Dec 03 21:36:54 crc kubenswrapper[4765]: I1203 21:36:54.129452 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vj5h7_1033ee94-376d-4190-8e79-ce0d34031aed/kube-rbac-proxy/0.log" Dec 03 21:36:54 crc kubenswrapper[4765]: I1203 21:36:54.151577 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vj5h7_1033ee94-376d-4190-8e79-ce0d34031aed/machine-api-operator/0.log" Dec 03 21:37:09 crc kubenswrapper[4765]: I1203 21:37:09.077431 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-h5t5j_42acdc1c-8668-4544-886a-4346236c7e76/cert-manager-controller/0.log" Dec 03 21:37:09 crc kubenswrapper[4765]: I1203 21:37:09.227237 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vf7sc_aca9fa03-3bb8-4912-aa71-037533fe4b0d/cert-manager-cainjector/0.log" Dec 03 21:37:09 crc kubenswrapper[4765]: I1203 21:37:09.261429 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-twkk9_293b6288-6f0b-4e96-815a-3dffcd7a641c/cert-manager-webhook/0.log" Dec 03 21:37:22 crc kubenswrapper[4765]: I1203 21:37:22.777430 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-fd5mm_a66f7626-aad6-4d61-91e8-b764b50c5e0b/nmstate-console-plugin/0.log" Dec 03 
21:37:22 crc kubenswrapper[4765]: I1203 21:37:22.955816 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-w2v9s_a026c029-77f9-4020-8c0f-6655cbc1dcb6/nmstate-handler/0.log" Dec 03 21:37:22 crc kubenswrapper[4765]: I1203 21:37:22.986283 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jdsgs_d51bdd1a-e635-4ebb-863b-aaa822deb666/kube-rbac-proxy/0.log" Dec 03 21:37:22 crc kubenswrapper[4765]: I1203 21:37:22.996729 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jdsgs_d51bdd1a-e635-4ebb-863b-aaa822deb666/nmstate-metrics/0.log" Dec 03 21:37:23 crc kubenswrapper[4765]: I1203 21:37:23.149576 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2dfvx_cd09d44f-8050-4a97-a4e9-73ec54239864/nmstate-operator/0.log" Dec 03 21:37:23 crc kubenswrapper[4765]: I1203 21:37:23.213931 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-jlplw_86af5345-1169-4a47-8f7c-215533b0d752/nmstate-webhook/0.log" Dec 03 21:37:37 crc kubenswrapper[4765]: I1203 21:37:37.052566 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-vmq9m"] Dec 03 21:37:37 crc kubenswrapper[4765]: I1203 21:37:37.073914 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-vmq9m"] Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.027156 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-f585-account-create-update-dnq4r"] Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.036846 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-f585-account-create-update-dnq4r"] Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.363425 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-f8648f98b-n78jt_b248c7e1-c2a2-4c22-ab0f-fb221be60e58/kube-rbac-proxy/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.374954 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736bb387-1ef7-4b32-9421-c6c8133d3e3c" path="/var/lib/kubelet/pods/736bb387-1ef7-4b32-9421-c6c8133d3e3c/volumes" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.375517 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd082e4-2821-403e-a3c7-d25b6b09d645" path="/var/lib/kubelet/pods/7dd082e4-2821-403e-a3c7-d25b6b09d645/volumes" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.406810 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-n78jt_b248c7e1-c2a2-4c22-ab0f-fb221be60e58/controller/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.509326 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.668947 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.699074 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.707256 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.750724 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.866381 4765 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.890523 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.891145 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:37:38 crc kubenswrapper[4765]: I1203 21:37:38.931165 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.073813 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.098043 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.116284 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/controller/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.117807 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.240985 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/frr-metrics/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.281826 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/kube-rbac-proxy-frr/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.365932 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/kube-rbac-proxy/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.456772 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/reloader/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.571179 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-l7nmz_e875bdde-0dbd-40b6-a84c-1bdd7e4baabf/frr-k8s-webhook-server/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.791352 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-869886bfd4-t75fk_56245235-eef6-472d-b481-1b9d7f80b89c/manager/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.897121 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7494cc9b6f-4zr8g_79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a/webhook-server/0.log" Dec 03 21:37:39 crc kubenswrapper[4765]: I1203 21:37:39.987888 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qvsp4_fed0fc97-3d14-4716-ad43-4c3bfd606850/kube-rbac-proxy/0.log" Dec 03 21:37:40 crc kubenswrapper[4765]: I1203 21:37:40.501844 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qvsp4_fed0fc97-3d14-4716-ad43-4c3bfd606850/speaker/0.log" Dec 03 21:37:40 crc kubenswrapper[4765]: I1203 21:37:40.614336 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/frr/0.log" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.159960 4765 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p57pp"] Dec 03 21:37:51 crc kubenswrapper[4765]: E1203 21:37:51.161379 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="registry-server" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.161404 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="registry-server" Dec 03 21:37:51 crc kubenswrapper[4765]: E1203 21:37:51.161443 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="extract-content" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.161456 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="extract-content" Dec 03 21:37:51 crc kubenswrapper[4765]: E1203 21:37:51.161489 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="registry-server" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.161502 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="registry-server" Dec 03 21:37:51 crc kubenswrapper[4765]: E1203 21:37:51.161523 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="extract-content" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.161560 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="extract-content" Dec 03 21:37:51 crc kubenswrapper[4765]: E1203 21:37:51.161582 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="extract-utilities" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.161594 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="extract-utilities" Dec 03 21:37:51 crc kubenswrapper[4765]: E1203 21:37:51.161621 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="extract-utilities" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.161632 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="extract-utilities" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.162015 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ceb1b6-0d8f-4f21-b2bc-55b5c8d9a369" containerName="registry-server" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.162082 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8981db3a-0b5e-40d4-bd4c-ec3cc725f428" containerName="registry-server" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.164775 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.183082 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p57pp"] Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.236566 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-utilities\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.236775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9lx\" (UniqueName: \"kubernetes.io/projected/aefefd88-637c-4bec-b765-5a69cd6bfb2a-kube-api-access-2h9lx\") pod \"redhat-operators-p57pp\" (UID: 
\"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.236822 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-catalog-content\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.338866 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-utilities\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.338992 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9lx\" (UniqueName: \"kubernetes.io/projected/aefefd88-637c-4bec-b765-5a69cd6bfb2a-kube-api-access-2h9lx\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.339021 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-catalog-content\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.339374 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-utilities\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " 
pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.339673 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-catalog-content\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.369381 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9lx\" (UniqueName: \"kubernetes.io/projected/aefefd88-637c-4bec-b765-5a69cd6bfb2a-kube-api-access-2h9lx\") pod \"redhat-operators-p57pp\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.482906 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:37:51 crc kubenswrapper[4765]: I1203 21:37:51.972056 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p57pp"] Dec 03 21:37:52 crc kubenswrapper[4765]: I1203 21:37:52.419341 4765 generic.go:334] "Generic (PLEG): container finished" podID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerID="c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc" exitCode=0 Dec 03 21:37:52 crc kubenswrapper[4765]: I1203 21:37:52.419379 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerDied","Data":"c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc"} Dec 03 21:37:52 crc kubenswrapper[4765]: I1203 21:37:52.419402 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" 
event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerStarted","Data":"176c6ac34364948cf155dcc5377550a6ed1519c4c07c91132277109c7113cfd3"} Dec 03 21:37:53 crc kubenswrapper[4765]: I1203 21:37:53.429840 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerStarted","Data":"cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0"} Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.424397 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/util/0.log" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.442349 4765 generic.go:334] "Generic (PLEG): container finished" podID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerID="cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0" exitCode=0 Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.442391 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerDied","Data":"cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0"} Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.597936 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/pull/0.log" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.679972 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/pull/0.log" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.743772 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/util/0.log" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.798201 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.798281 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.869093 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/util/0.log" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.878994 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/extract/0.log" Dec 03 21:37:54 crc kubenswrapper[4765]: I1203 21:37:54.934122 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/pull/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.097909 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/util/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 
21:37:55.226786 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/util/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.256464 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/pull/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.270503 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/pull/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.464342 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerStarted","Data":"a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7"} Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.493619 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p57pp" podStartSLOduration=1.987716636 podStartE2EDuration="4.49360302s" podCreationTimestamp="2025-12-03 21:37:51 +0000 UTC" firstStartedPulling="2025-12-03 21:37:52.421318952 +0000 UTC m=+3570.351864113" lastFinishedPulling="2025-12-03 21:37:54.927205346 +0000 UTC m=+3572.857750497" observedRunningTime="2025-12-03 21:37:55.485811609 +0000 UTC m=+3573.416356760" watchObservedRunningTime="2025-12-03 21:37:55.49360302 +0000 UTC m=+3573.424148171" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.548750 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/util/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 
21:37:55.561020 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/extract/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.697809 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/pull/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.741759 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-utilities/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.907456 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-content/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.942136 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-content/0.log" Dec 03 21:37:55 crc kubenswrapper[4765]: I1203 21:37:55.955653 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-utilities/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.248936 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-utilities/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.280773 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-content/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.476366 4765 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-utilities/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.683126 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/registry-server/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.685012 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-utilities/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.702047 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-content/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.721281 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-content/0.log" Dec 03 21:37:56 crc kubenswrapper[4765]: I1203 21:37:56.912565 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-utilities/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.057713 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-content/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.132315 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztctx_72a5b180-7b23-4bfd-a10b-c35f73c732aa/marketplace-operator/1.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.232869 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztctx_72a5b180-7b23-4bfd-a10b-c35f73c732aa/marketplace-operator/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.391506 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/registry-server/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.395325 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-utilities/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.554669 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-content/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.556524 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-utilities/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.660483 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-content/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.817572 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-content/0.log" Dec 03 21:37:57 crc kubenswrapper[4765]: I1203 21:37:57.910697 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-utilities/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.020517 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-utilities/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.020566 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/registry-server/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.203193 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-utilities/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.243247 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-content/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.263341 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-content/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.471395 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-content/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.474935 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-utilities/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.561906 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/extract-utilities/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.725337 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/extract-utilities/0.log" Dec 03 
21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.850382 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/extract-content/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.943488 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/extract-content/0.log" Dec 03 21:37:58 crc kubenswrapper[4765]: I1203 21:37:58.970525 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/registry-server/0.log" Dec 03 21:37:59 crc kubenswrapper[4765]: I1203 21:37:59.186872 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/extract-utilities/0.log" Dec 03 21:37:59 crc kubenswrapper[4765]: I1203 21:37:59.233519 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/extract-content/0.log" Dec 03 21:37:59 crc kubenswrapper[4765]: I1203 21:37:59.258778 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p57pp_aefefd88-637c-4bec-b765-5a69cd6bfb2a/registry-server/0.log" Dec 03 21:38:01 crc kubenswrapper[4765]: I1203 21:38:01.483212 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:38:01 crc kubenswrapper[4765]: I1203 21:38:01.483635 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:38:02 crc kubenswrapper[4765]: I1203 21:38:02.537495 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p57pp" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" 
containerName="registry-server" probeResult="failure" output=< Dec 03 21:38:02 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 21:38:02 crc kubenswrapper[4765]: > Dec 03 21:38:06 crc kubenswrapper[4765]: I1203 21:38:06.060531 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-wvfd8"] Dec 03 21:38:06 crc kubenswrapper[4765]: I1203 21:38:06.074405 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-wvfd8"] Dec 03 21:38:06 crc kubenswrapper[4765]: I1203 21:38:06.378366 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a10b13-ab07-4448-bbaa-f2077c07c07d" path="/var/lib/kubelet/pods/53a10b13-ab07-4448-bbaa-f2077c07c07d/volumes" Dec 03 21:38:11 crc kubenswrapper[4765]: I1203 21:38:11.541455 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:38:11 crc kubenswrapper[4765]: I1203 21:38:11.596563 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:38:11 crc kubenswrapper[4765]: I1203 21:38:11.795064 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p57pp"] Dec 03 21:38:12 crc kubenswrapper[4765]: I1203 21:38:12.645792 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p57pp" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="registry-server" containerID="cri-o://a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7" gracePeriod=2 Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.113694 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.299788 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-catalog-content\") pod \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.299959 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-utilities\") pod \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.300067 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h9lx\" (UniqueName: \"kubernetes.io/projected/aefefd88-637c-4bec-b765-5a69cd6bfb2a-kube-api-access-2h9lx\") pod \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\" (UID: \"aefefd88-637c-4bec-b765-5a69cd6bfb2a\") " Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.301442 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-utilities" (OuterVolumeSpecName: "utilities") pod "aefefd88-637c-4bec-b765-5a69cd6bfb2a" (UID: "aefefd88-637c-4bec-b765-5a69cd6bfb2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.306352 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefefd88-637c-4bec-b765-5a69cd6bfb2a-kube-api-access-2h9lx" (OuterVolumeSpecName: "kube-api-access-2h9lx") pod "aefefd88-637c-4bec-b765-5a69cd6bfb2a" (UID: "aefefd88-637c-4bec-b765-5a69cd6bfb2a"). InnerVolumeSpecName "kube-api-access-2h9lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.402194 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h9lx\" (UniqueName: \"kubernetes.io/projected/aefefd88-637c-4bec-b765-5a69cd6bfb2a-kube-api-access-2h9lx\") on node \"crc\" DevicePath \"\"" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.402231 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.409972 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aefefd88-637c-4bec-b765-5a69cd6bfb2a" (UID: "aefefd88-637c-4bec-b765-5a69cd6bfb2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.504675 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aefefd88-637c-4bec-b765-5a69cd6bfb2a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.658374 4765 generic.go:334] "Generic (PLEG): container finished" podID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerID="a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7" exitCode=0 Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.658420 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerDied","Data":"a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7"} Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.658427 4765 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p57pp" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.658450 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p57pp" event={"ID":"aefefd88-637c-4bec-b765-5a69cd6bfb2a","Type":"ContainerDied","Data":"176c6ac34364948cf155dcc5377550a6ed1519c4c07c91132277109c7113cfd3"} Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.658473 4765 scope.go:117] "RemoveContainer" containerID="a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.692267 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p57pp"] Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.696694 4765 scope.go:117] "RemoveContainer" containerID="cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.705956 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p57pp"] Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.725277 4765 scope.go:117] "RemoveContainer" containerID="c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.767235 4765 scope.go:117] "RemoveContainer" containerID="a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7" Dec 03 21:38:13 crc kubenswrapper[4765]: E1203 21:38:13.767986 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7\": container with ID starting with a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7 not found: ID does not exist" containerID="a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.768171 4765 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7"} err="failed to get container status \"a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7\": rpc error: code = NotFound desc = could not find container \"a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7\": container with ID starting with a747d5c60ff5ef98db924b6529138177342a0f907fdb1b112f4ce7a6ccf6d6d7 not found: ID does not exist" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.768273 4765 scope.go:117] "RemoveContainer" containerID="cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0" Dec 03 21:38:13 crc kubenswrapper[4765]: E1203 21:38:13.770591 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0\": container with ID starting with cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0 not found: ID does not exist" containerID="cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.770826 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0"} err="failed to get container status \"cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0\": rpc error: code = NotFound desc = could not find container \"cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0\": container with ID starting with cea87d151186495f826753e436e846255c75c1616310fa4cb067a56b7c7f76b0 not found: ID does not exist" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.770957 4765 scope.go:117] "RemoveContainer" containerID="c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc" Dec 03 21:38:13 crc kubenswrapper[4765]: E1203 
21:38:13.771570 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc\": container with ID starting with c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc not found: ID does not exist" containerID="c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc" Dec 03 21:38:13 crc kubenswrapper[4765]: I1203 21:38:13.771701 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc"} err="failed to get container status \"c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc\": rpc error: code = NotFound desc = could not find container \"c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc\": container with ID starting with c3ecd220d6035c36e7583f6a8e005a021d826e40a06c9744cc196b26b6eb33fc not found: ID does not exist" Dec 03 21:38:14 crc kubenswrapper[4765]: I1203 21:38:14.372659 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" path="/var/lib/kubelet/pods/aefefd88-637c-4bec-b765-5a69cd6bfb2a/volumes" Dec 03 21:38:22 crc kubenswrapper[4765]: E1203 21:38:22.999075 4765 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.65:43236->38.102.83.65:33367: write tcp 38.102.83.65:43236->38.102.83.65:33367: write: broken pipe Dec 03 21:38:24 crc kubenswrapper[4765]: I1203 21:38:24.798047 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:38:24 crc kubenswrapper[4765]: I1203 21:38:24.799356 4765 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:38:31 crc kubenswrapper[4765]: I1203 21:38:31.283776 4765 scope.go:117] "RemoveContainer" containerID="fdc2a765c2fc31debde1ed8555074faabb99f41382ad8db03ddcf0e0774e2614" Dec 03 21:38:31 crc kubenswrapper[4765]: I1203 21:38:31.339236 4765 scope.go:117] "RemoveContainer" containerID="e7bbe396b0c678906f6ce22da8b34a3b92f64654ccd76e64dd972972239aad01" Dec 03 21:38:31 crc kubenswrapper[4765]: I1203 21:38:31.364275 4765 scope.go:117] "RemoveContainer" containerID="b3b093fa882959b3cad20dad9f74dde41d8600aa14eb85fb0ddf0494880dc0c2" Dec 03 21:38:34 crc kubenswrapper[4765]: E1203 21:38:34.390056 4765 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.65:54308->38.102.83.65:33367: write tcp 38.102.83.65:54308->38.102.83.65:33367: write: broken pipe Dec 03 21:38:54 crc kubenswrapper[4765]: I1203 21:38:54.798860 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:38:54 crc kubenswrapper[4765]: I1203 21:38:54.799418 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:38:54 crc kubenswrapper[4765]: I1203 21:38:54.799471 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:38:54 crc kubenswrapper[4765]: I1203 21:38:54.800324 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"240deb353c7074d110d2634b5f89a9616ad676fdf257d58d9d9803b7423e1b1d"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:38:54 crc kubenswrapper[4765]: I1203 21:38:54.800408 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://240deb353c7074d110d2634b5f89a9616ad676fdf257d58d9d9803b7423e1b1d" gracePeriod=600 Dec 03 21:38:55 crc kubenswrapper[4765]: I1203 21:38:55.115541 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="240deb353c7074d110d2634b5f89a9616ad676fdf257d58d9d9803b7423e1b1d" exitCode=0 Dec 03 21:38:55 crc kubenswrapper[4765]: I1203 21:38:55.115635 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"240deb353c7074d110d2634b5f89a9616ad676fdf257d58d9d9803b7423e1b1d"} Dec 03 21:38:55 crc kubenswrapper[4765]: I1203 21:38:55.115905 4765 scope.go:117] "RemoveContainer" containerID="32a1c3ba2d0a7e12ac55e7002b4e4242b583315796de9efad25cfce500ce93de" Dec 03 21:38:56 crc kubenswrapper[4765]: I1203 21:38:56.131094 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" 
event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca"} Dec 03 21:39:38 crc kubenswrapper[4765]: I1203 21:39:38.668623 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc309a07-23e2-493e-870b-d3fa60428deb" containerID="951abac239f4b50592ce4760c2ec273834bda22eb09eb903b8419d3c9731055d" exitCode=0 Dec 03 21:39:38 crc kubenswrapper[4765]: I1203 21:39:38.668723 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wnz7b/must-gather-58cn4" event={"ID":"fc309a07-23e2-493e-870b-d3fa60428deb","Type":"ContainerDied","Data":"951abac239f4b50592ce4760c2ec273834bda22eb09eb903b8419d3c9731055d"} Dec 03 21:39:38 crc kubenswrapper[4765]: I1203 21:39:38.670000 4765 scope.go:117] "RemoveContainer" containerID="951abac239f4b50592ce4760c2ec273834bda22eb09eb903b8419d3c9731055d" Dec 03 21:39:39 crc kubenswrapper[4765]: I1203 21:39:39.380066 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wnz7b_must-gather-58cn4_fc309a07-23e2-493e-870b-d3fa60428deb/gather/0.log" Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.155024 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wnz7b/must-gather-58cn4"] Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.156015 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wnz7b/must-gather-58cn4" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="copy" containerID="cri-o://81aafe1535c511c1a9e26e21f5f0e9bc81380da05b7421cc0c465fc4227eef65" gracePeriod=2 Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.176401 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wnz7b/must-gather-58cn4"] Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.769075 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-wnz7b_must-gather-58cn4_fc309a07-23e2-493e-870b-d3fa60428deb/copy/0.log" Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.769584 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc309a07-23e2-493e-870b-d3fa60428deb" containerID="81aafe1535c511c1a9e26e21f5f0e9bc81380da05b7421cc0c465fc4227eef65" exitCode=143 Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.944261 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wnz7b_must-gather-58cn4_fc309a07-23e2-493e-870b-d3fa60428deb/copy/0.log" Dec 03 21:39:47 crc kubenswrapper[4765]: I1203 21:39:47.944894 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.124574 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcwjz\" (UniqueName: \"kubernetes.io/projected/fc309a07-23e2-493e-870b-d3fa60428deb-kube-api-access-dcwjz\") pod \"fc309a07-23e2-493e-870b-d3fa60428deb\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.124687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc309a07-23e2-493e-870b-d3fa60428deb-must-gather-output\") pod \"fc309a07-23e2-493e-870b-d3fa60428deb\" (UID: \"fc309a07-23e2-493e-870b-d3fa60428deb\") " Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.135593 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc309a07-23e2-493e-870b-d3fa60428deb-kube-api-access-dcwjz" (OuterVolumeSpecName: "kube-api-access-dcwjz") pod "fc309a07-23e2-493e-870b-d3fa60428deb" (UID: "fc309a07-23e2-493e-870b-d3fa60428deb"). InnerVolumeSpecName "kube-api-access-dcwjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.228392 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcwjz\" (UniqueName: \"kubernetes.io/projected/fc309a07-23e2-493e-870b-d3fa60428deb-kube-api-access-dcwjz\") on node \"crc\" DevicePath \"\"" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.292143 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc309a07-23e2-493e-870b-d3fa60428deb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fc309a07-23e2-493e-870b-d3fa60428deb" (UID: "fc309a07-23e2-493e-870b-d3fa60428deb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.331907 4765 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc309a07-23e2-493e-870b-d3fa60428deb-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.372868 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" path="/var/lib/kubelet/pods/fc309a07-23e2-493e-870b-d3fa60428deb/volumes" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.777477 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wnz7b_must-gather-58cn4_fc309a07-23e2-493e-870b-d3fa60428deb/copy/0.log" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.777823 4765 scope.go:117] "RemoveContainer" containerID="81aafe1535c511c1a9e26e21f5f0e9bc81380da05b7421cc0c465fc4227eef65" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.777871 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wnz7b/must-gather-58cn4" Dec 03 21:39:48 crc kubenswrapper[4765]: I1203 21:39:48.796018 4765 scope.go:117] "RemoveContainer" containerID="951abac239f4b50592ce4760c2ec273834bda22eb09eb903b8419d3c9731055d" Dec 03 21:40:31 crc kubenswrapper[4765]: I1203 21:40:31.593785 4765 scope.go:117] "RemoveContainer" containerID="7cad4108802af273d75ade660defb9f3aaa9239805b7cefbac8ce04ea4e02878" Dec 03 21:41:24 crc kubenswrapper[4765]: I1203 21:41:24.798601 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:41:24 crc kubenswrapper[4765]: I1203 21:41:24.799264 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:41:54 crc kubenswrapper[4765]: I1203 21:41:54.798817 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:41:54 crc kubenswrapper[4765]: I1203 21:41:54.799579 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:41:55 crc kubenswrapper[4765]: I1203 21:41:55.734764 4765 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 03 21:42:00 crc kubenswrapper[4765]: I1203 21:42:00.736503 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 03 21:42:00 crc kubenswrapper[4765]: I1203 21:42:00.737624 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 03 21:42:01 crc kubenswrapper[4765]: I1203 21:42:01.758507 4765 patch_prober.go:28] interesting pod/console-67c7449f96-7h4sh container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 21:42:01 crc kubenswrapper[4765]: I1203 21:42:01.758978 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-67c7449f96-7h4sh" podUID="81692b3d-3bdf-49a7-b434-fe3b6a07da87" containerName="console" probeResult="failure" output="Get \"https://10.217.0.41:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 21:42:05 crc kubenswrapper[4765]: I1203 21:42:05.732164 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 03 21:42:05 crc kubenswrapper[4765]: I1203 21:42:05.733216 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/ceilometer-0" Dec 03 21:42:05 crc kubenswrapper[4765]: I1203 21:42:05.735403 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"99e4685f5bcbb791a7206589f2cc4445e69d2424a7c154321180f4ae75a5abfd"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Dec 03 21:42:05 crc kubenswrapper[4765]: I1203 21:42:05.735594 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerName="ceilometer-central-agent" containerID="cri-o://99e4685f5bcbb791a7206589f2cc4445e69d2424a7c154321180f4ae75a5abfd" gracePeriod=30 Dec 03 21:42:13 crc kubenswrapper[4765]: I1203 21:42:13.765740 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:42:14 crc kubenswrapper[4765]: I1203 21:42:14.493218 4765 generic.go:334] "Generic (PLEG): container finished" podID="56245235-eef6-472d-b481-1b9d7f80b89c" containerID="61bb81993038e3919f5e7c8450fc404ec532dc84a3f51032bc2353961b040fdc" exitCode=1 Dec 03 21:42:14 crc kubenswrapper[4765]: I1203 21:42:14.493288 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" event={"ID":"56245235-eef6-472d-b481-1b9d7f80b89c","Type":"ContainerDied","Data":"61bb81993038e3919f5e7c8450fc404ec532dc84a3f51032bc2353961b040fdc"} Dec 03 21:42:14 crc kubenswrapper[4765]: I1203 21:42:14.494621 4765 scope.go:117] "RemoveContainer" containerID="61bb81993038e3919f5e7c8450fc404ec532dc84a3f51032bc2353961b040fdc" Dec 03 21:42:14 crc kubenswrapper[4765]: I1203 21:42:14.498753 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerID="99e4685f5bcbb791a7206589f2cc4445e69d2424a7c154321180f4ae75a5abfd" exitCode=0 
Dec 03 21:42:14 crc kubenswrapper[4765]: I1203 21:42:14.498796 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerDied","Data":"99e4685f5bcbb791a7206589f2cc4445e69d2424a7c154321180f4ae75a5abfd"} Dec 03 21:42:15 crc kubenswrapper[4765]: I1203 21:42:15.516177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4c7f313-908a-4e2c-a5a0-3b1626d6e188","Type":"ContainerStarted","Data":"4134e6b70000074ff3e1e7bbed4f622e42461c42622195793e1058f8621164aa"} Dec 03 21:42:15 crc kubenswrapper[4765]: I1203 21:42:15.519452 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" event={"ID":"56245235-eef6-472d-b481-1b9d7f80b89c","Type":"ContainerStarted","Data":"502204e4f1743e279ad34df9f0b848ff3101798f7e4c9ae21ad60e7191307f50"} Dec 03 21:42:15 crc kubenswrapper[4765]: I1203 21:42:15.520283 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 21:42:24 crc kubenswrapper[4765]: I1203 21:42:24.798640 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 21:42:24 crc kubenswrapper[4765]: I1203 21:42:24.799848 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 21:42:24 crc kubenswrapper[4765]: I1203 21:42:24.799956 4765 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" Dec 03 21:42:24 crc kubenswrapper[4765]: I1203 21:42:24.801839 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 21:42:24 crc kubenswrapper[4765]: I1203 21:42:24.802034 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" gracePeriod=600 Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.961301 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnnl6"] Dec 03 21:42:25 crc kubenswrapper[4765]: E1203 21:42:25.962107 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="copy" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962124 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="copy" Dec 03 21:42:25 crc kubenswrapper[4765]: E1203 21:42:25.962155 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="extract-content" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962163 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="extract-content" Dec 03 21:42:25 crc kubenswrapper[4765]: E1203 21:42:25.962179 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="extract-utilities" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962190 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="extract-utilities" Dec 03 21:42:25 crc kubenswrapper[4765]: E1203 21:42:25.962212 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="registry-server" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962220 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="registry-server" Dec 03 21:42:25 crc kubenswrapper[4765]: E1203 21:42:25.962239 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="gather" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962247 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="gather" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962510 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefefd88-637c-4bec-b765-5a69cd6bfb2a" containerName="registry-server" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962563 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="copy" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.962581 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc309a07-23e2-493e-870b-d3fa60428deb" containerName="gather" Dec 03 21:42:25 crc kubenswrapper[4765]: I1203 21:42:25.965430 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:25.976034 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnnl6"] Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.099976 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxg2n\" (UniqueName: \"kubernetes.io/projected/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-kube-api-access-kxg2n\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.100082 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-utilities\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.100215 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-catalog-content\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.202678 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxg2n\" (UniqueName: \"kubernetes.io/projected/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-kube-api-access-kxg2n\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.202740 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-utilities\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.202785 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-catalog-content\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.203256 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-catalog-content\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.203735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-utilities\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.243528 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxg2n\" (UniqueName: \"kubernetes.io/projected/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-kube-api-access-kxg2n\") pod \"certified-operators-rnnl6\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.453826 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:42:26 crc kubenswrapper[4765]: I1203 21:42:26.954701 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnnl6"] Dec 03 21:42:27 crc kubenswrapper[4765]: I1203 21:42:27.639555 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnl6" event={"ID":"d9ac4076-5486-4b9b-9b8c-5d399ba4223e","Type":"ContainerStarted","Data":"7a243293a243e8116625be97734e656a3ae00d5083ce4221b36bfbd4716b9973"} Dec 03 21:42:33 crc kubenswrapper[4765]: I1203 21:42:33.407106 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-swqqp_f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5/machine-config-daemon/13.log" Dec 03 21:42:33 crc kubenswrapper[4765]: I1203 21:42:33.409200 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" exitCode=-1 Dec 03 21:42:33 crc kubenswrapper[4765]: I1203 21:42:33.409270 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca"} Dec 03 21:42:33 crc kubenswrapper[4765]: I1203 21:42:33.409345 4765 scope.go:117] "RemoveContainer" containerID="240deb353c7074d110d2634b5f89a9616ad676fdf257d58d9d9803b7423e1b1d" Dec 03 21:42:52 crc kubenswrapper[4765]: I1203 21:42:52.982520 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-869886bfd4-t75fk" Dec 03 21:43:00 crc kubenswrapper[4765]: I1203 21:43:00.734463 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="f4c7f313-908a-4e2c-a5a0-3b1626d6e188" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 03 21:43:05 crc kubenswrapper[4765]: E1203 21:43:05.304246 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:43:05 crc kubenswrapper[4765]: I1203 21:43:05.799550 4765 generic.go:334] "Generic (PLEG): container finished" podID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerID="397d8e9eb1f3651983463d37b0f7c1cf22fd00b2f51ef42d5203c2d445271fca" exitCode=0 Dec 03 21:43:05 crc kubenswrapper[4765]: I1203 21:43:05.799638 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnl6" event={"ID":"d9ac4076-5486-4b9b-9b8c-5d399ba4223e","Type":"ContainerDied","Data":"397d8e9eb1f3651983463d37b0f7c1cf22fd00b2f51ef42d5203c2d445271fca"} Dec 03 21:43:05 crc kubenswrapper[4765]: I1203 21:43:05.807501 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:43:05 crc kubenswrapper[4765]: E1203 21:43:05.807972 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:43:07 crc kubenswrapper[4765]: I1203 21:43:07.831476 4765 generic.go:334] "Generic (PLEG): container 
finished" podID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerID="44643b6e864e55d1c57b00de2fe305a7d705211dc0f3dc83853979038984061d" exitCode=0 Dec 03 21:43:07 crc kubenswrapper[4765]: I1203 21:43:07.831612 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnl6" event={"ID":"d9ac4076-5486-4b9b-9b8c-5d399ba4223e","Type":"ContainerDied","Data":"44643b6e864e55d1c57b00de2fe305a7d705211dc0f3dc83853979038984061d"} Dec 03 21:43:08 crc kubenswrapper[4765]: I1203 21:43:08.842863 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnl6" event={"ID":"d9ac4076-5486-4b9b-9b8c-5d399ba4223e","Type":"ContainerStarted","Data":"cb12c803a1ca5eca5afaaf706beadd251f79d5ca477d285d71ebcbc0035e8047"} Dec 03 21:43:08 crc kubenswrapper[4765]: I1203 21:43:08.873139 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnnl6" podStartSLOduration=41.442340965 podStartE2EDuration="43.873121599s" podCreationTimestamp="2025-12-03 21:42:25 +0000 UTC" firstStartedPulling="2025-12-03 21:43:05.806532054 +0000 UTC m=+3883.737077205" lastFinishedPulling="2025-12-03 21:43:08.237312658 +0000 UTC m=+3886.167857839" observedRunningTime="2025-12-03 21:43:08.867787325 +0000 UTC m=+3886.798332486" watchObservedRunningTime="2025-12-03 21:43:08.873121599 +0000 UTC m=+3886.803666760" Dec 03 21:43:16 crc kubenswrapper[4765]: I1203 21:43:16.454257 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:43:16 crc kubenswrapper[4765]: I1203 21:43:16.455281 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:43:16 crc kubenswrapper[4765]: I1203 21:43:16.521899 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnnl6" 
Dec 03 21:43:17 crc kubenswrapper[4765]: I1203 21:43:17.000905 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:43:17 crc kubenswrapper[4765]: I1203 21:43:17.065889 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnnl6"] Dec 03 21:43:18 crc kubenswrapper[4765]: I1203 21:43:18.939803 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnnl6" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="registry-server" containerID="cri-o://cb12c803a1ca5eca5afaaf706beadd251f79d5ca477d285d71ebcbc0035e8047" gracePeriod=2 Dec 03 21:43:19 crc kubenswrapper[4765]: I1203 21:43:19.953771 4765 generic.go:334] "Generic (PLEG): container finished" podID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerID="cb12c803a1ca5eca5afaaf706beadd251f79d5ca477d285d71ebcbc0035e8047" exitCode=0 Dec 03 21:43:19 crc kubenswrapper[4765]: I1203 21:43:19.954446 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnl6" event={"ID":"d9ac4076-5486-4b9b-9b8c-5d399ba4223e","Type":"ContainerDied","Data":"cb12c803a1ca5eca5afaaf706beadd251f79d5ca477d285d71ebcbc0035e8047"} Dec 03 21:43:19 crc kubenswrapper[4765]: I1203 21:43:19.954485 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnl6" event={"ID":"d9ac4076-5486-4b9b-9b8c-5d399ba4223e","Type":"ContainerDied","Data":"7a243293a243e8116625be97734e656a3ae00d5083ce4221b36bfbd4716b9973"} Dec 03 21:43:19 crc kubenswrapper[4765]: I1203 21:43:19.954527 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a243293a243e8116625be97734e656a3ae00d5083ce4221b36bfbd4716b9973" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.024454 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.080109 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxg2n\" (UniqueName: \"kubernetes.io/projected/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-kube-api-access-kxg2n\") pod \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.080206 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-catalog-content\") pod \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.080356 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-utilities\") pod \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\" (UID: \"d9ac4076-5486-4b9b-9b8c-5d399ba4223e\") " Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.081690 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-utilities" (OuterVolumeSpecName: "utilities") pod "d9ac4076-5486-4b9b-9b8c-5d399ba4223e" (UID: "d9ac4076-5486-4b9b-9b8c-5d399ba4223e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.095391 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-kube-api-access-kxg2n" (OuterVolumeSpecName: "kube-api-access-kxg2n") pod "d9ac4076-5486-4b9b-9b8c-5d399ba4223e" (UID: "d9ac4076-5486-4b9b-9b8c-5d399ba4223e"). InnerVolumeSpecName "kube-api-access-kxg2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.135518 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9ac4076-5486-4b9b-9b8c-5d399ba4223e" (UID: "d9ac4076-5486-4b9b-9b8c-5d399ba4223e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.183400 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxg2n\" (UniqueName: \"kubernetes.io/projected/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-kube-api-access-kxg2n\") on node \"crc\" DevicePath \"\"" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.183454 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.183476 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9ac4076-5486-4b9b-9b8c-5d399ba4223e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:43:20 crc kubenswrapper[4765]: I1203 21:43:20.360924 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:43:20 crc kubenswrapper[4765]: E1203 21:43:20.361408 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:43:20 
crc kubenswrapper[4765]: I1203 21:43:20.964939 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnnl6" Dec 03 21:43:21 crc kubenswrapper[4765]: I1203 21:43:21.003122 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnnl6"] Dec 03 21:43:21 crc kubenswrapper[4765]: I1203 21:43:21.013973 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnnl6"] Dec 03 21:43:22 crc kubenswrapper[4765]: I1203 21:43:22.390364 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" path="/var/lib/kubelet/pods/d9ac4076-5486-4b9b-9b8c-5d399ba4223e/volumes" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.096711 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfd6x/must-gather-4hw8w"] Dec 03 21:43:23 crc kubenswrapper[4765]: E1203 21:43:23.097167 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="registry-server" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.097196 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="registry-server" Dec 03 21:43:23 crc kubenswrapper[4765]: E1203 21:43:23.097229 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="extract-utilities" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.097238 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="extract-utilities" Dec 03 21:43:23 crc kubenswrapper[4765]: E1203 21:43:23.097279 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="extract-content" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 
21:43:23.097288 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="extract-content" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.097514 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ac4076-5486-4b9b-9b8c-5d399ba4223e" containerName="registry-server" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.098787 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.100974 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pfd6x"/"openshift-service-ca.crt" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.101160 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pfd6x"/"default-dockercfg-fsbgv" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.101328 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pfd6x"/"kube-root-ca.crt" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.127207 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pfd6x/must-gather-4hw8w"] Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.148710 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqdn\" (UniqueName: \"kubernetes.io/projected/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-kube-api-access-vzqdn\") pod \"must-gather-4hw8w\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") " pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.149065 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-must-gather-output\") pod \"must-gather-4hw8w\" 
(UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") " pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.251546 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqdn\" (UniqueName: \"kubernetes.io/projected/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-kube-api-access-vzqdn\") pod \"must-gather-4hw8w\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") " pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.251628 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-must-gather-output\") pod \"must-gather-4hw8w\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") " pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.252212 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-must-gather-output\") pod \"must-gather-4hw8w\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") " pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.275122 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzqdn\" (UniqueName: \"kubernetes.io/projected/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-kube-api-access-vzqdn\") pod \"must-gather-4hw8w\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") " pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.467625 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" Dec 03 21:43:23 crc kubenswrapper[4765]: I1203 21:43:23.751057 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pfd6x/must-gather-4hw8w"] Dec 03 21:43:25 crc kubenswrapper[4765]: I1203 21:43:25.008761 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" event={"ID":"948659df-cbfd-48fb-8ee2-d68fcd7fb58a","Type":"ContainerStarted","Data":"ba71757afbdd3207104f3eed8717b47daf75194d943f4fd218139032f411261c"} Dec 03 21:43:25 crc kubenswrapper[4765]: I1203 21:43:25.009358 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" event={"ID":"948659df-cbfd-48fb-8ee2-d68fcd7fb58a","Type":"ContainerStarted","Data":"00834273d96d24236cb54e942907c8f73f5c749a1652d0967412f84e5db1537e"} Dec 03 21:43:25 crc kubenswrapper[4765]: I1203 21:43:25.009373 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" event={"ID":"948659df-cbfd-48fb-8ee2-d68fcd7fb58a","Type":"ContainerStarted","Data":"2954ae2815358d6e82df5cd8d253cbf805541117a62c39a1935c947c351c10d1"} Dec 03 21:43:25 crc kubenswrapper[4765]: I1203 21:43:25.032246 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" podStartSLOduration=2.032227057 podStartE2EDuration="2.032227057s" podCreationTimestamp="2025-12-03 21:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:43:25.024570322 +0000 UTC m=+3902.955115503" watchObservedRunningTime="2025-12-03 21:43:25.032227057 +0000 UTC m=+3902.962772208" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.313584 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-27vtb"] Dec 03 21:43:28 crc kubenswrapper[4765]: 
I1203 21:43:28.316556 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.368382 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j6q\" (UniqueName: \"kubernetes.io/projected/76d7b38b-7075-40f4-848a-538f68ace49d-kube-api-access-l7j6q\") pod \"crc-debug-27vtb\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.368809 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76d7b38b-7075-40f4-848a-538f68ace49d-host\") pod \"crc-debug-27vtb\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.470877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j6q\" (UniqueName: \"kubernetes.io/projected/76d7b38b-7075-40f4-848a-538f68ace49d-kube-api-access-l7j6q\") pod \"crc-debug-27vtb\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.470980 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76d7b38b-7075-40f4-848a-538f68ace49d-host\") pod \"crc-debug-27vtb\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.471124 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76d7b38b-7075-40f4-848a-538f68ace49d-host\") pod \"crc-debug-27vtb\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") 
" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.491196 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j6q\" (UniqueName: \"kubernetes.io/projected/76d7b38b-7075-40f4-848a-538f68ace49d-kube-api-access-l7j6q\") pod \"crc-debug-27vtb\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: I1203 21:43:28.640423 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:43:28 crc kubenswrapper[4765]: W1203 21:43:28.674870 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76d7b38b_7075_40f4_848a_538f68ace49d.slice/crio-5202b2fd58a5c29d9dbee6b3369c297fa0ee614c9e7bd419abbb56298155c4cb WatchSource:0}: Error finding container 5202b2fd58a5c29d9dbee6b3369c297fa0ee614c9e7bd419abbb56298155c4cb: Status 404 returned error can't find the container with id 5202b2fd58a5c29d9dbee6b3369c297fa0ee614c9e7bd419abbb56298155c4cb Dec 03 21:43:29 crc kubenswrapper[4765]: I1203 21:43:29.045102 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" event={"ID":"76d7b38b-7075-40f4-848a-538f68ace49d","Type":"ContainerStarted","Data":"446fdaa6190632a4478e981155563e2b54151103ceb58084cec352ae3779f1f0"} Dec 03 21:43:29 crc kubenswrapper[4765]: I1203 21:43:29.045354 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" event={"ID":"76d7b38b-7075-40f4-848a-538f68ace49d","Type":"ContainerStarted","Data":"5202b2fd58a5c29d9dbee6b3369c297fa0ee614c9e7bd419abbb56298155c4cb"} Dec 03 21:43:29 crc kubenswrapper[4765]: I1203 21:43:29.059077 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" 
podStartSLOduration=1.059058662 podStartE2EDuration="1.059058662s" podCreationTimestamp="2025-12-03 21:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:43:29.057394947 +0000 UTC m=+3906.987940118" watchObservedRunningTime="2025-12-03 21:43:29.059058662 +0000 UTC m=+3906.989603823" Dec 03 21:43:32 crc kubenswrapper[4765]: I1203 21:43:32.369038 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:43:32 crc kubenswrapper[4765]: E1203 21:43:32.369761 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:43:47 crc kubenswrapper[4765]: I1203 21:43:47.360450 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:43:47 crc kubenswrapper[4765]: E1203 21:43:47.363007 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:43:59 crc kubenswrapper[4765]: I1203 21:43:59.348633 4765 generic.go:334] "Generic (PLEG): container finished" podID="76d7b38b-7075-40f4-848a-538f68ace49d" containerID="446fdaa6190632a4478e981155563e2b54151103ceb58084cec352ae3779f1f0" exitCode=0 Dec 03 
21:43:59 crc kubenswrapper[4765]: I1203 21:43:59.349201 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" event={"ID":"76d7b38b-7075-40f4-848a-538f68ace49d","Type":"ContainerDied","Data":"446fdaa6190632a4478e981155563e2b54151103ceb58084cec352ae3779f1f0"} Dec 03 21:43:59 crc kubenswrapper[4765]: I1203 21:43:59.360901 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:43:59 crc kubenswrapper[4765]: E1203 21:43:59.361108 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.465138 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.508241 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-27vtb"] Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.517758 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-27vtb"] Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.659992 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7j6q\" (UniqueName: \"kubernetes.io/projected/76d7b38b-7075-40f4-848a-538f68ace49d-kube-api-access-l7j6q\") pod \"76d7b38b-7075-40f4-848a-538f68ace49d\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.660134 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76d7b38b-7075-40f4-848a-538f68ace49d-host\") pod \"76d7b38b-7075-40f4-848a-538f68ace49d\" (UID: \"76d7b38b-7075-40f4-848a-538f68ace49d\") " Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.660626 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76d7b38b-7075-40f4-848a-538f68ace49d-host" (OuterVolumeSpecName: "host") pod "76d7b38b-7075-40f4-848a-538f68ace49d" (UID: "76d7b38b-7075-40f4-848a-538f68ace49d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.668481 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7b38b-7075-40f4-848a-538f68ace49d-kube-api-access-l7j6q" (OuterVolumeSpecName: "kube-api-access-l7j6q") pod "76d7b38b-7075-40f4-848a-538f68ace49d" (UID: "76d7b38b-7075-40f4-848a-538f68ace49d"). InnerVolumeSpecName "kube-api-access-l7j6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.762124 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7j6q\" (UniqueName: \"kubernetes.io/projected/76d7b38b-7075-40f4-848a-538f68ace49d-kube-api-access-l7j6q\") on node \"crc\" DevicePath \"\"" Dec 03 21:44:00 crc kubenswrapper[4765]: I1203 21:44:00.762154 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76d7b38b-7075-40f4-848a-538f68ace49d-host\") on node \"crc\" DevicePath \"\"" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.373936 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5202b2fd58a5c29d9dbee6b3369c297fa0ee614c9e7bd419abbb56298155c4cb" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.374027 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-27vtb" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.695647 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-l4zks"] Dec 03 21:44:01 crc kubenswrapper[4765]: E1203 21:44:01.696085 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7b38b-7075-40f4-848a-538f68ace49d" containerName="container-00" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.696100 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7b38b-7075-40f4-848a-538f68ace49d" containerName="container-00" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.696381 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7b38b-7075-40f4-848a-538f68ace49d" containerName="container-00" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.697286 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.779448 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a194ef52-a849-4f58-8dcd-5f6b27679cc3-host\") pod \"crc-debug-l4zks\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.779535 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnk8n\" (UniqueName: \"kubernetes.io/projected/a194ef52-a849-4f58-8dcd-5f6b27679cc3-kube-api-access-qnk8n\") pod \"crc-debug-l4zks\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.882132 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a194ef52-a849-4f58-8dcd-5f6b27679cc3-host\") pod \"crc-debug-l4zks\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.882329 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a194ef52-a849-4f58-8dcd-5f6b27679cc3-host\") pod \"crc-debug-l4zks\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:01 crc kubenswrapper[4765]: I1203 21:44:01.882380 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnk8n\" (UniqueName: \"kubernetes.io/projected/a194ef52-a849-4f58-8dcd-5f6b27679cc3-kube-api-access-qnk8n\") pod \"crc-debug-l4zks\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:01 crc 
kubenswrapper[4765]: I1203 21:44:01.900028 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnk8n\" (UniqueName: \"kubernetes.io/projected/a194ef52-a849-4f58-8dcd-5f6b27679cc3-kube-api-access-qnk8n\") pod \"crc-debug-l4zks\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:02 crc kubenswrapper[4765]: I1203 21:44:02.015409 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:02 crc kubenswrapper[4765]: I1203 21:44:02.371149 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d7b38b-7075-40f4-848a-538f68ace49d" path="/var/lib/kubelet/pods/76d7b38b-7075-40f4-848a-538f68ace49d/volumes" Dec 03 21:44:02 crc kubenswrapper[4765]: I1203 21:44:02.384893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" event={"ID":"a194ef52-a849-4f58-8dcd-5f6b27679cc3","Type":"ContainerStarted","Data":"c8f33190d190c246802102b388db12afa79a2dcd6769b3f82106beb0e15e769e"} Dec 03 21:44:02 crc kubenswrapper[4765]: I1203 21:44:02.384941 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" event={"ID":"a194ef52-a849-4f58-8dcd-5f6b27679cc3","Type":"ContainerStarted","Data":"f8d0c68dd6ff3109b56fb9be9842f020c7e97501d40f5b8ecc29ca5a7cda2cf0"} Dec 03 21:44:02 crc kubenswrapper[4765]: I1203 21:44:02.397984 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" podStartSLOduration=1.39797055 podStartE2EDuration="1.39797055s" podCreationTimestamp="2025-12-03 21:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 21:44:02.396265354 +0000 UTC m=+3940.326810505" watchObservedRunningTime="2025-12-03 21:44:02.39797055 +0000 UTC 
m=+3940.328515701" Dec 03 21:44:03 crc kubenswrapper[4765]: I1203 21:44:03.393130 4765 generic.go:334] "Generic (PLEG): container finished" podID="a194ef52-a849-4f58-8dcd-5f6b27679cc3" containerID="c8f33190d190c246802102b388db12afa79a2dcd6769b3f82106beb0e15e769e" exitCode=0 Dec 03 21:44:03 crc kubenswrapper[4765]: I1203 21:44:03.393202 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" event={"ID":"a194ef52-a849-4f58-8dcd-5f6b27679cc3","Type":"ContainerDied","Data":"c8f33190d190c246802102b388db12afa79a2dcd6769b3f82106beb0e15e769e"} Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.495403 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.534863 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-l4zks"] Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.545960 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-l4zks"] Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.628187 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a194ef52-a849-4f58-8dcd-5f6b27679cc3-host\") pod \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.628245 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnk8n\" (UniqueName: \"kubernetes.io/projected/a194ef52-a849-4f58-8dcd-5f6b27679cc3-kube-api-access-qnk8n\") pod \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\" (UID: \"a194ef52-a849-4f58-8dcd-5f6b27679cc3\") " Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.628319 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a194ef52-a849-4f58-8dcd-5f6b27679cc3-host" (OuterVolumeSpecName: "host") pod "a194ef52-a849-4f58-8dcd-5f6b27679cc3" (UID: "a194ef52-a849-4f58-8dcd-5f6b27679cc3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.628977 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a194ef52-a849-4f58-8dcd-5f6b27679cc3-host\") on node \"crc\" DevicePath \"\"" Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.636496 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a194ef52-a849-4f58-8dcd-5f6b27679cc3-kube-api-access-qnk8n" (OuterVolumeSpecName: "kube-api-access-qnk8n") pod "a194ef52-a849-4f58-8dcd-5f6b27679cc3" (UID: "a194ef52-a849-4f58-8dcd-5f6b27679cc3"). InnerVolumeSpecName "kube-api-access-qnk8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:44:04 crc kubenswrapper[4765]: I1203 21:44:04.730559 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnk8n\" (UniqueName: \"kubernetes.io/projected/a194ef52-a849-4f58-8dcd-5f6b27679cc3-kube-api-access-qnk8n\") on node \"crc\" DevicePath \"\"" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.411398 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d0c68dd6ff3109b56fb9be9842f020c7e97501d40f5b8ecc29ca5a7cda2cf0" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.411806 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-l4zks" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.748061 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-mwd9q"] Dec 03 21:44:05 crc kubenswrapper[4765]: E1203 21:44:05.749089 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a194ef52-a849-4f58-8dcd-5f6b27679cc3" containerName="container-00" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.749158 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a194ef52-a849-4f58-8dcd-5f6b27679cc3" containerName="container-00" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.749416 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a194ef52-a849-4f58-8dcd-5f6b27679cc3" containerName="container-00" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.750061 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.848502 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5q2\" (UniqueName: \"kubernetes.io/projected/0f4f1715-075b-4a97-bd3a-f832ef81a455-kube-api-access-qm5q2\") pod \"crc-debug-mwd9q\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.848572 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f4f1715-075b-4a97-bd3a-f832ef81a455-host\") pod \"crc-debug-mwd9q\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.951197 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5q2\" (UniqueName: 
\"kubernetes.io/projected/0f4f1715-075b-4a97-bd3a-f832ef81a455-kube-api-access-qm5q2\") pod \"crc-debug-mwd9q\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.951345 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f4f1715-075b-4a97-bd3a-f832ef81a455-host\") pod \"crc-debug-mwd9q\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.951289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f4f1715-075b-4a97-bd3a-f832ef81a455-host\") pod \"crc-debug-mwd9q\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:05 crc kubenswrapper[4765]: I1203 21:44:05.974429 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5q2\" (UniqueName: \"kubernetes.io/projected/0f4f1715-075b-4a97-bd3a-f832ef81a455-kube-api-access-qm5q2\") pod \"crc-debug-mwd9q\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.067868 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.373666 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a194ef52-a849-4f58-8dcd-5f6b27679cc3" path="/var/lib/kubelet/pods/a194ef52-a849-4f58-8dcd-5f6b27679cc3/volumes" Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.426736 4765 generic.go:334] "Generic (PLEG): container finished" podID="0f4f1715-075b-4a97-bd3a-f832ef81a455" containerID="865cdc64795e9957a0ba45d0e8b89e012be4ce938f45e4c8b2e4069801e96173" exitCode=0 Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.426799 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" event={"ID":"0f4f1715-075b-4a97-bd3a-f832ef81a455","Type":"ContainerDied","Data":"865cdc64795e9957a0ba45d0e8b89e012be4ce938f45e4c8b2e4069801e96173"} Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.426837 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" event={"ID":"0f4f1715-075b-4a97-bd3a-f832ef81a455","Type":"ContainerStarted","Data":"d40582957f2ae8d0fb8187f748251ae838fc201e0e240586444568340a375548"} Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.463892 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-mwd9q"] Dec 03 21:44:06 crc kubenswrapper[4765]: I1203 21:44:06.471812 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfd6x/crc-debug-mwd9q"] Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.539715 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.684866 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5q2\" (UniqueName: \"kubernetes.io/projected/0f4f1715-075b-4a97-bd3a-f832ef81a455-kube-api-access-qm5q2\") pod \"0f4f1715-075b-4a97-bd3a-f832ef81a455\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.685072 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f4f1715-075b-4a97-bd3a-f832ef81a455-host\") pod \"0f4f1715-075b-4a97-bd3a-f832ef81a455\" (UID: \"0f4f1715-075b-4a97-bd3a-f832ef81a455\") " Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.685516 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f4f1715-075b-4a97-bd3a-f832ef81a455-host" (OuterVolumeSpecName: "host") pod "0f4f1715-075b-4a97-bd3a-f832ef81a455" (UID: "0f4f1715-075b-4a97-bd3a-f832ef81a455"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.693063 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4f1715-075b-4a97-bd3a-f832ef81a455-kube-api-access-qm5q2" (OuterVolumeSpecName: "kube-api-access-qm5q2") pod "0f4f1715-075b-4a97-bd3a-f832ef81a455" (UID: "0f4f1715-075b-4a97-bd3a-f832ef81a455"). InnerVolumeSpecName "kube-api-access-qm5q2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.787102 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f4f1715-075b-4a97-bd3a-f832ef81a455-host\") on node \"crc\" DevicePath \"\"" Dec 03 21:44:07 crc kubenswrapper[4765]: I1203 21:44:07.787156 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5q2\" (UniqueName: \"kubernetes.io/projected/0f4f1715-075b-4a97-bd3a-f832ef81a455-kube-api-access-qm5q2\") on node \"crc\" DevicePath \"\"" Dec 03 21:44:08 crc kubenswrapper[4765]: I1203 21:44:08.368317 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4f1715-075b-4a97-bd3a-f832ef81a455" path="/var/lib/kubelet/pods/0f4f1715-075b-4a97-bd3a-f832ef81a455/volumes" Dec 03 21:44:08 crc kubenswrapper[4765]: I1203 21:44:08.442965 4765 scope.go:117] "RemoveContainer" containerID="865cdc64795e9957a0ba45d0e8b89e012be4ce938f45e4c8b2e4069801e96173" Dec 03 21:44:08 crc kubenswrapper[4765]: I1203 21:44:08.443011 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfd6x/crc-debug-mwd9q" Dec 03 21:44:14 crc kubenswrapper[4765]: I1203 21:44:14.364097 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:44:14 crc kubenswrapper[4765]: E1203 21:44:14.364854 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:44:25 crc kubenswrapper[4765]: I1203 21:44:25.360159 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:44:25 crc kubenswrapper[4765]: E1203 21:44:25.361364 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:44:40 crc kubenswrapper[4765]: I1203 21:44:40.359814 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:44:40 crc kubenswrapper[4765]: E1203 21:44:40.360512 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:44:55 crc kubenswrapper[4765]: I1203 21:44:55.360494 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:44:55 crc kubenswrapper[4765]: E1203 21:44:55.361207 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.221567 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk"] Dec 03 21:45:00 crc kubenswrapper[4765]: E1203 21:45:00.222623 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4f1715-075b-4a97-bd3a-f832ef81a455" containerName="container-00" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.222639 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4f1715-075b-4a97-bd3a-f832ef81a455" containerName="container-00" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.222869 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4f1715-075b-4a97-bd3a-f832ef81a455" containerName="container-00" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.223648 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.226371 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.226593 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.233142 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk"] Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.386965 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxhcg\" (UniqueName: \"kubernetes.io/projected/7318be1c-0956-4304-a697-857f0f95ce7e-kube-api-access-bxhcg\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.387046 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7318be1c-0956-4304-a697-857f0f95ce7e-config-volume\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.387371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7318be1c-0956-4304-a697-857f0f95ce7e-secret-volume\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.488683 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxhcg\" (UniqueName: \"kubernetes.io/projected/7318be1c-0956-4304-a697-857f0f95ce7e-kube-api-access-bxhcg\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.489009 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7318be1c-0956-4304-a697-857f0f95ce7e-config-volume\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.489095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7318be1c-0956-4304-a697-857f0f95ce7e-secret-volume\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.490723 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7318be1c-0956-4304-a697-857f0f95ce7e-config-volume\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.497941 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7318be1c-0956-4304-a697-857f0f95ce7e-secret-volume\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.511915 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxhcg\" (UniqueName: \"kubernetes.io/projected/7318be1c-0956-4304-a697-857f0f95ce7e-kube-api-access-bxhcg\") pod \"collect-profiles-29413305-nrcsk\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:00 crc kubenswrapper[4765]: I1203 21:45:00.591206 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:01 crc kubenswrapper[4765]: I1203 21:45:01.084886 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk"] Dec 03 21:45:01 crc kubenswrapper[4765]: W1203 21:45:01.091750 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7318be1c_0956_4304_a697_857f0f95ce7e.slice/crio-67d1c732be92141fb6938d7be03648e7a6093aea120d844cb6a7deece62b2933 WatchSource:0}: Error finding container 67d1c732be92141fb6938d7be03648e7a6093aea120d844cb6a7deece62b2933: Status 404 returned error can't find the container with id 67d1c732be92141fb6938d7be03648e7a6093aea120d844cb6a7deece62b2933 Dec 03 21:45:01 crc kubenswrapper[4765]: I1203 21:45:01.984919 4765 generic.go:334] "Generic (PLEG): container finished" podID="7318be1c-0956-4304-a697-857f0f95ce7e" containerID="34b9bc3491c908be27d5268e7056a4f1b6e27d36171cdf4249133387f88f27c9" exitCode=0 Dec 03 21:45:01 crc kubenswrapper[4765]: I1203 21:45:01.985167 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" event={"ID":"7318be1c-0956-4304-a697-857f0f95ce7e","Type":"ContainerDied","Data":"34b9bc3491c908be27d5268e7056a4f1b6e27d36171cdf4249133387f88f27c9"} Dec 03 21:45:01 crc kubenswrapper[4765]: I1203 21:45:01.985191 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" event={"ID":"7318be1c-0956-4304-a697-857f0f95ce7e","Type":"ContainerStarted","Data":"67d1c732be92141fb6938d7be03648e7a6093aea120d844cb6a7deece62b2933"} Dec 03 21:45:02 crc kubenswrapper[4765]: I1203 21:45:02.781478 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7455d9cf5d-nflkk_f0226955-fe8e-4128-8c2e-66d0a79ee3ad/barbican-api/0.log" Dec 03 21:45:02 crc kubenswrapper[4765]: I1203 21:45:02.864748 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7455d9cf5d-nflkk_f0226955-fe8e-4128-8c2e-66d0a79ee3ad/barbican-api-log/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.007026 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65ffb6446-hdn74_a90044b4-b1fd-4c11-bb40-b52bf1a912f8/barbican-keystone-listener/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.054479 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65ffb6446-hdn74_a90044b4-b1fd-4c11-bb40-b52bf1a912f8/barbican-keystone-listener-log/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.124647 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fbbc67fdf-scz6t_e0aad3bb-6dd6-4673-b738-2f04849106ce/barbican-worker/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.257217 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fbbc67fdf-scz6t_e0aad3bb-6dd6-4673-b738-2f04849106ce/barbican-worker-log/0.log" Dec 03 
21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.324108 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nfrkx_e3b2c2f7-5ef3-47e1-bb0e-3298074acb32/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.394924 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.525048 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/ceilometer-central-agent/1.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.547279 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7318be1c-0956-4304-a697-857f0f95ce7e-config-volume\") pod \"7318be1c-0956-4304-a697-857f0f95ce7e\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.547453 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7318be1c-0956-4304-a697-857f0f95ce7e-secret-volume\") pod \"7318be1c-0956-4304-a697-857f0f95ce7e\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.547535 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxhcg\" (UniqueName: \"kubernetes.io/projected/7318be1c-0956-4304-a697-857f0f95ce7e-kube-api-access-bxhcg\") pod \"7318be1c-0956-4304-a697-857f0f95ce7e\" (UID: \"7318be1c-0956-4304-a697-857f0f95ce7e\") " Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.548120 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7318be1c-0956-4304-a697-857f0f95ce7e-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "7318be1c-0956-4304-a697-857f0f95ce7e" (UID: "7318be1c-0956-4304-a697-857f0f95ce7e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.557886 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7318be1c-0956-4304-a697-857f0f95ce7e-kube-api-access-bxhcg" (OuterVolumeSpecName: "kube-api-access-bxhcg") pod "7318be1c-0956-4304-a697-857f0f95ce7e" (UID: "7318be1c-0956-4304-a697-857f0f95ce7e"). InnerVolumeSpecName "kube-api-access-bxhcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.565606 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7318be1c-0956-4304-a697-857f0f95ce7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7318be1c-0956-4304-a697-857f0f95ce7e" (UID: "7318be1c-0956-4304-a697-857f0f95ce7e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.579770 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/ceilometer-notification-agent/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.580712 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/ceilometer-central-agent/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.649694 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7318be1c-0956-4304-a697-857f0f95ce7e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.649733 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7318be1c-0956-4304-a697-857f0f95ce7e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.649745 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxhcg\" (UniqueName: \"kubernetes.io/projected/7318be1c-0956-4304-a697-857f0f95ce7e-kube-api-access-bxhcg\") on node \"crc\" DevicePath \"\"" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.658500 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/proxy-httpd/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.715991 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f4c7f313-908a-4e2c-a5a0-3b1626d6e188/sg-core/0.log" Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.802120 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-mtrhx_d5b77ee4-d4b7-48a2-993b-c7e911e88b0d/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" 
Dec 03 21:45:03 crc kubenswrapper[4765]: I1203 21:45:03.912439 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-n5wl4_f56cb10b-3bc1-42b6-90e6-8d1802c20167/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.001019 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" event={"ID":"7318be1c-0956-4304-a697-857f0f95ce7e","Type":"ContainerDied","Data":"67d1c732be92141fb6938d7be03648e7a6093aea120d844cb6a7deece62b2933"} Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.001061 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d1c732be92141fb6938d7be03648e7a6093aea120d844cb6a7deece62b2933" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.001312 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413305-nrcsk" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.052522 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b340b625-0c86-49d6-8e7f-2bbfa3ab71d7/cinder-api/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.099953 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b340b625-0c86-49d6-8e7f-2bbfa3ab71d7/cinder-api-log/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.241362 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a6827558-2402-4d4f-b230-eb41101a3c41/probe/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.410790 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd57395a-abd0-4768-b1e9-0cdf5a9930d3/cinder-scheduler/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.473985 4765 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l"] Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.482419 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413260-9c88l"] Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.487981 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a6827558-2402-4d4f-b230-eb41101a3c41/cinder-backup/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.520519 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd57395a-abd0-4768-b1e9-0cdf5a9930d3/probe/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.706910 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_090dfe86-44b6-4444-9075-abfc758bc2e4/cinder-volume/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.732978 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_090dfe86-44b6-4444-9075-abfc758bc2e4/probe/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.842021 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4fxm7_c766674f-ed9a-4a8c-8c83-a94542469c60/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:04 crc kubenswrapper[4765]: I1203 21:45:04.937340 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-45dqw_0acda383-efb7-45c7-8ead-19f3bb2bac36/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.023947 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hv49c_bcfd57de-3b61-4a34-a4a3-c7808baedc2d/init/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.195223 
4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hv49c_bcfd57de-3b61-4a34-a4a3-c7808baedc2d/init/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.271911 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hv49c_bcfd57de-3b61-4a34-a4a3-c7808baedc2d/dnsmasq-dns/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.278253 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43154ec4-ba15-4d12-afeb-a3528c1269c8/glance-httpd/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.413663 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43154ec4-ba15-4d12-afeb-a3528c1269c8/glance-log/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.493735 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d2d1fba0-111f-49ed-9992-e75c8f53d277/glance-httpd/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.527440 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d2d1fba0-111f-49ed-9992-e75c8f53d277/glance-log/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.751449 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-754897654-c5z9l_742566d1-3d02-42ea-8db1-e482ff699ada/horizon/0.log" Dec 03 21:45:05 crc kubenswrapper[4765]: I1203 21:45:05.959511 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7zb8r_51f7c3b1-f566-4371-ad1d-487bbfa1be12/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.041863 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p2brn_b6cce00b-a9f8-4d5e-abbf-0e72ce498b52/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.071183 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-754897654-c5z9l_742566d1-3d02-42ea-8db1-e482ff699ada/horizon-log/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.242160 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29413261-sllgz_d52a513d-e85f-4c95-9188-8748e9f08c2b/keystone-cron/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.246961 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-57c6f94f6-xmzln_4e9b168b-07ea-4870-ba96-9680c4530133/keystone-api/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.265622 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5d1a0b5d-2754-4bb8-bc2c-e20ba9631e8b/kube-state-metrics/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.371311 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87495622-7d1f-48a8-9007-40e58f936a08" path="/var/lib/kubelet/pods/87495622-7d1f-48a8-9007-40e58f936a08/volumes" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.477818 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7b6lw_52b80dc5-1f1c-44ce-b3f9-a5cb613d3b96/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.549586 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_fdd97dcb-bc57-4867-a85d-be547f7b716f/manila-api-log/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.599874 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_fdd97dcb-bc57-4867-a85d-be547f7b716f/manila-api/0.log" Dec 
03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.671624 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8dd498cd-6ec2-4d8f-ad18-72aae897e33e/probe/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.719927 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8dd498cd-6ec2-4d8f-ad18-72aae897e33e/manila-scheduler/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.819104 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7d62facf-5ee9-45cf-a031-15834157a662/manila-share/0.log" Dec 03 21:45:06 crc kubenswrapper[4765]: I1203 21:45:06.844144 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_7d62facf-5ee9-45cf-a031-15834157a662/probe/0.log" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.052019 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8458f9f649-c6lrl_1f78f95a-adb3-4939-a8f0-3fdd4d3757da/neutron-api/0.log" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.062646 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8458f9f649-c6lrl_1f78f95a-adb3-4939-a8f0-3fdd4d3757da/neutron-httpd/0.log" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.181270 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-knsd7_360b19b3-c391-467e-ab4c-f7cb150873ea/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.359329 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:45:07 crc kubenswrapper[4765]: E1203 21:45:07.359625 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.621868 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3f90377-318a-4a36-a187-62434c1fb8c3/nova-api-log/0.log" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.731138 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fae73c00-5619-4157-9bf2-4996314616aa/nova-cell0-conductor-conductor/0.log" Dec 03 21:45:07 crc kubenswrapper[4765]: I1203 21:45:07.878868 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3f90377-318a-4a36-a187-62434c1fb8c3/nova-api-api/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.007048 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7c9a4172-479b-4188-9264-208492b2be91/nova-cell1-conductor-conductor/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.077193 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c40d979f-5978-45a1-9b88-b4587eb142c2/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.162944 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ffa82a93-b10c-4414-be93-7d003c7917e9/memcached/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.192818 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-6z6xf_d13320a0-48f4-4813-9692-9554f411d998/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.504360 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_c11f148f-d7db-4776-a326-cb655caf8b19/nova-metadata-log/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.665194 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pt9f6"] Dec 03 21:45:08 crc kubenswrapper[4765]: E1203 21:45:08.665613 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7318be1c-0956-4304-a697-857f0f95ce7e" containerName="collect-profiles" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.665629 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7318be1c-0956-4304-a697-857f0f95ce7e" containerName="collect-profiles" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.665812 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7318be1c-0956-4304-a697-857f0f95ce7e" containerName="collect-profiles" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.667013 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.686569 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6963989c-bc38-471a-a22a-c7e90de20bf9/nova-scheduler-scheduler/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.688365 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt9f6"] Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.740488 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-utilities\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.740567 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-catalog-content\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.740608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdjh\" (UniqueName: \"kubernetes.io/projected/24528a5d-243e-4267-b0c2-c3f9229d6735-kube-api-access-ccdjh\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.762618 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f/mysql-bootstrap/0.log" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.842343 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-catalog-content\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.842403 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdjh\" (UniqueName: \"kubernetes.io/projected/24528a5d-243e-4267-b0c2-c3f9229d6735-kube-api-access-ccdjh\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.842531 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-utilities\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.842940 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-utilities\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.843151 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-catalog-content\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.862489 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdjh\" (UniqueName: \"kubernetes.io/projected/24528a5d-243e-4267-b0c2-c3f9229d6735-kube-api-access-ccdjh\") pod \"redhat-marketplace-pt9f6\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:08 crc kubenswrapper[4765]: I1203 21:45:08.990162 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.049199 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f/mysql-bootstrap/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.051547 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d3f1a32-afd2-49fc-b9cd-b49f14770ab2/mysql-bootstrap/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.081864 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71fd51b3-7a6c-4d2a-a39a-93ebcd06da7f/galera/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.366134 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c11f148f-d7db-4776-a326-cb655caf8b19/nova-metadata-metadata/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.394325 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d3f1a32-afd2-49fc-b9cd-b49f14770ab2/mysql-bootstrap/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.440981 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt9f6"] Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.446029 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1d3f1a32-afd2-49fc-b9cd-b49f14770ab2/galera/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.478669 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_bd00d94a-54ce-420e-959d-4b10ecce11d0/openstackclient/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.598129 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-f85pk_2a9aeba1-759a-41ad-a871-5cfa33de5aae/ovn-controller/0.log" Dec 03 21:45:09 crc 
kubenswrapper[4765]: I1203 21:45:09.672155 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4b7vx_70312ced-15b1-4366-aa36-c32538b61141/openstack-network-exporter/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.776745 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovsdb-server-init/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.927398 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovsdb-server-init/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.969396 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovsdb-server/0.log" Dec 03 21:45:09 crc kubenswrapper[4765]: I1203 21:45:09.970899 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wbnps_f08ba0a5-f646-4b38-a53e-687a78bc572e/ovs-vswitchd/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.033573 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-nbhcq_acf5a824-dd5c-412f-a7b2-848352ec8eaa/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.075078 4765 generic.go:334] "Generic (PLEG): container finished" podID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerID="0c4c016d395d1e3e495d2f2a4e035e6a593bef76f883f1aeb70935a696e05a0b" exitCode=0 Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.075177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt9f6" event={"ID":"24528a5d-243e-4267-b0c2-c3f9229d6735","Type":"ContainerDied","Data":"0c4c016d395d1e3e495d2f2a4e035e6a593bef76f883f1aeb70935a696e05a0b"} Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 
21:45:10.075395 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt9f6" event={"ID":"24528a5d-243e-4267-b0c2-c3f9229d6735","Type":"ContainerStarted","Data":"d56d36180bd89df2a599d252908c222824256a07f7a0388493fc7bc2bc0d59b4"} Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.141088 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_403709eb-a3d4-4e89-ac92-de401056e3d0/openstack-network-exporter/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.167832 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_403709eb-a3d4-4e89-ac92-de401056e3d0/ovn-northd/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.211512 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ae88784b-a398-447a-aaba-b2c2e1c7dc48/openstack-network-exporter/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.361853 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ae88784b-a398-447a-aaba-b2c2e1c7dc48/ovsdbserver-nb/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.392160 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba74cb76-f80f-4396-9ddb-1eeec6c21fd6/openstack-network-exporter/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.448142 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba74cb76-f80f-4396-9ddb-1eeec6c21fd6/ovsdbserver-sb/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.665487 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c98cc54bd-jknm9_e714435f-b27b-485e-82cb-4cd1f1491cac/placement-api/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.668115 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-5c98cc54bd-jknm9_e714435f-b27b-485e-82cb-4cd1f1491cac/placement-log/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.668662 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0d9b22a-4baf-4947-bbba-e158c4e554e5/setup-container/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.856694 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0d9b22a-4baf-4947-bbba-e158c4e554e5/setup-container/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.866535 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be5953f-7d37-4d82-8ea7-3cff10d763c1/setup-container/0.log" Dec 03 21:45:10 crc kubenswrapper[4765]: I1203 21:45:10.895544 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0d9b22a-4baf-4947-bbba-e158c4e554e5/rabbitmq/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.076648 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be5953f-7d37-4d82-8ea7-3cff10d763c1/rabbitmq/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.128219 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0be5953f-7d37-4d82-8ea7-3cff10d763c1/setup-container/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.153113 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-p65jk_405fb54f-da87-4598-8f88-b9cb64799a12/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.286448 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7lbt5_47b92082-05ae-430d-bdfd-836be92480a8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:11 crc 
kubenswrapper[4765]: I1203 21:45:11.355812 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-hg7nd_743d2875-36d7-427b-af2e-c8a8e8d5a81c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.382548 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sxk7z_d1f5d4f1-df58-457b-b56a-64c6cda175a4/ssh-known-hosts-edpm-deployment/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.523958 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a4425100-38b1-43b3-90ba-8691dcf4d4aa/tempest-tests-tempest-tests-runner/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.553009 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_14c03184-5d99-4a39-99ba-605dd4c44040/test-operator-logs-container/0.log" Dec 03 21:45:11 crc kubenswrapper[4765]: I1203 21:45:11.676420 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7gn2t_db43dd3d-a5b5-4cc3-bfbd-18689908b450/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 21:45:12 crc kubenswrapper[4765]: I1203 21:45:12.091508 4765 generic.go:334] "Generic (PLEG): container finished" podID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerID="3cabf5bd6c6b3e37bf4777a8f6959785c44e6e5c133850ec18ff8aff6450a335" exitCode=0 Dec 03 21:45:12 crc kubenswrapper[4765]: I1203 21:45:12.091548 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt9f6" event={"ID":"24528a5d-243e-4267-b0c2-c3f9229d6735","Type":"ContainerDied","Data":"3cabf5bd6c6b3e37bf4777a8f6959785c44e6e5c133850ec18ff8aff6450a335"} Dec 03 21:45:13 crc kubenswrapper[4765]: I1203 21:45:13.101273 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-pt9f6" event={"ID":"24528a5d-243e-4267-b0c2-c3f9229d6735","Type":"ContainerStarted","Data":"ce3e56cbba7fd4d906c10fb625895acb39c0b7ff08cbd6d2c93540a7518cc8f5"} Dec 03 21:45:13 crc kubenswrapper[4765]: I1203 21:45:13.119609 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pt9f6" podStartSLOduration=2.63807119 podStartE2EDuration="5.119592544s" podCreationTimestamp="2025-12-03 21:45:08 +0000 UTC" firstStartedPulling="2025-12-03 21:45:10.076615852 +0000 UTC m=+4008.007161003" lastFinishedPulling="2025-12-03 21:45:12.558137206 +0000 UTC m=+4010.488682357" observedRunningTime="2025-12-03 21:45:13.115061901 +0000 UTC m=+4011.045607042" watchObservedRunningTime="2025-12-03 21:45:13.119592544 +0000 UTC m=+4011.050137705" Dec 03 21:45:18 crc kubenswrapper[4765]: I1203 21:45:18.360798 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:45:18 crc kubenswrapper[4765]: E1203 21:45:18.361478 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:45:18 crc kubenswrapper[4765]: I1203 21:45:18.991182 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:18 crc kubenswrapper[4765]: I1203 21:45:18.991500 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:19 crc kubenswrapper[4765]: I1203 21:45:19.033720 4765 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:19 crc kubenswrapper[4765]: I1203 21:45:19.867852 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:19 crc kubenswrapper[4765]: I1203 21:45:19.909317 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt9f6"] Dec 03 21:45:21 crc kubenswrapper[4765]: I1203 21:45:21.842082 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pt9f6" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="registry-server" containerID="cri-o://ce3e56cbba7fd4d906c10fb625895acb39c0b7ff08cbd6d2c93540a7518cc8f5" gracePeriod=2 Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.851819 4765 generic.go:334] "Generic (PLEG): container finished" podID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerID="ce3e56cbba7fd4d906c10fb625895acb39c0b7ff08cbd6d2c93540a7518cc8f5" exitCode=0 Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.852180 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt9f6" event={"ID":"24528a5d-243e-4267-b0c2-c3f9229d6735","Type":"ContainerDied","Data":"ce3e56cbba7fd4d906c10fb625895acb39c0b7ff08cbd6d2c93540a7518cc8f5"} Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.852213 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pt9f6" event={"ID":"24528a5d-243e-4267-b0c2-c3f9229d6735","Type":"ContainerDied","Data":"d56d36180bd89df2a599d252908c222824256a07f7a0388493fc7bc2bc0d59b4"} Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.852228 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56d36180bd89df2a599d252908c222824256a07f7a0388493fc7bc2bc0d59b4" Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.889420 4765 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.993559 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-utilities\") pod \"24528a5d-243e-4267-b0c2-c3f9229d6735\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.993662 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-catalog-content\") pod \"24528a5d-243e-4267-b0c2-c3f9229d6735\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.993769 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccdjh\" (UniqueName: \"kubernetes.io/projected/24528a5d-243e-4267-b0c2-c3f9229d6735-kube-api-access-ccdjh\") pod \"24528a5d-243e-4267-b0c2-c3f9229d6735\" (UID: \"24528a5d-243e-4267-b0c2-c3f9229d6735\") " Dec 03 21:45:22 crc kubenswrapper[4765]: I1203 21:45:22.995128 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-utilities" (OuterVolumeSpecName: "utilities") pod "24528a5d-243e-4267-b0c2-c3f9229d6735" (UID: "24528a5d-243e-4267-b0c2-c3f9229d6735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.006873 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24528a5d-243e-4267-b0c2-c3f9229d6735-kube-api-access-ccdjh" (OuterVolumeSpecName: "kube-api-access-ccdjh") pod "24528a5d-243e-4267-b0c2-c3f9229d6735" (UID: "24528a5d-243e-4267-b0c2-c3f9229d6735"). 
InnerVolumeSpecName "kube-api-access-ccdjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.017959 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24528a5d-243e-4267-b0c2-c3f9229d6735" (UID: "24528a5d-243e-4267-b0c2-c3f9229d6735"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.096115 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.096156 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24528a5d-243e-4267-b0c2-c3f9229d6735-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.096170 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccdjh\" (UniqueName: \"kubernetes.io/projected/24528a5d-243e-4267-b0c2-c3f9229d6735-kube-api-access-ccdjh\") on node \"crc\" DevicePath \"\"" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.865739 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pt9f6" Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.911918 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt9f6"] Dec 03 21:45:23 crc kubenswrapper[4765]: I1203 21:45:23.922586 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pt9f6"] Dec 03 21:45:24 crc kubenswrapper[4765]: I1203 21:45:24.372183 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" path="/var/lib/kubelet/pods/24528a5d-243e-4267-b0c2-c3f9229d6735/volumes" Dec 03 21:45:29 crc kubenswrapper[4765]: I1203 21:45:29.386323 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:45:29 crc kubenswrapper[4765]: E1203 21:45:29.386899 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:45:31 crc kubenswrapper[4765]: I1203 21:45:31.835594 4765 scope.go:117] "RemoveContainer" containerID="bdb69c7d0fbd9e9562003d5075c4d09c82f675a5b19548248bbc9542eb763120" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.009510 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ww6rq_d17f6ecc-799c-415b-98e2-67f859a96a1a/kube-rbac-proxy/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.110946 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ww6rq_d17f6ecc-799c-415b-98e2-67f859a96a1a/manager/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.235852 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-mvdp4_e7dd69d2-65b2-4677-b6ac-e90fd4c695c1/kube-rbac-proxy/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.305820 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-mvdp4_e7dd69d2-65b2-4677-b6ac-e90fd4c695c1/manager/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.366431 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/util/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.553341 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/pull/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.577001 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/pull/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.609145 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/util/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.724738 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/pull/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 
21:45:35.743434 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/util/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.746017 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dc7edd44e91a02d3bb861a15c0b2725fec07fb4ce2e6d02ddd1065054fwm6h2_b2b9a2d2-8e49-45b0-b855-62ce65981a6c/extract/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.895172 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-czxt5_50b1a98b-3f25-4b3f-9f55-fa99f3911561/kube-rbac-proxy/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.961154 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-czxt5_50b1a98b-3f25-4b3f-9f55-fa99f3911561/manager/0.log" Dec 03 21:45:35 crc kubenswrapper[4765]: I1203 21:45:35.995243 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6d7f88c74f-76fch_84cb39fe-086b-4822-b54f-a5af68d2203c/kube-rbac-proxy/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.123848 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6d7f88c74f-76fch_84cb39fe-086b-4822-b54f-a5af68d2203c/manager/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.144800 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-m9fpm_48ba0b62-8ac2-4059-ac6a-8643ee1ad149/kube-rbac-proxy/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.189671 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-m9fpm_48ba0b62-8ac2-4059-ac6a-8643ee1ad149/manager/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.325398 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-9cdp5_797a4394-d04a-491b-8008-819165536dc0/kube-rbac-proxy/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.333317 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-9cdp5_797a4394-d04a-491b-8008-819165536dc0/manager/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.451609 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7fw8v_6ba1b815-d381-4999-9d4d-9b9b595f6d06/kube-rbac-proxy/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.604548 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-vvxrw_4527f93e-9514-4750-9f1a-45d2fc649ef2/kube-rbac-proxy/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.665583 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7fw8v_6ba1b815-d381-4999-9d4d-9b9b595f6d06/manager/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.676860 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-vvxrw_4527f93e-9514-4750-9f1a-45d2fc649ef2/manager/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.807718 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-z6lzn_a3cc780d-abf0-4a2b-99c3-67f9602a782f/kube-rbac-proxy/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.886119 
4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-z6lzn_a3cc780d-abf0-4a2b-99c3-67f9602a782f/manager/0.log" Dec 03 21:45:36 crc kubenswrapper[4765]: I1203 21:45:36.951747 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-tjlbs_65cf60b9-98a5-4fe7-8675-28aadb893c7c/kube-rbac-proxy/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.019827 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-tjlbs_65cf60b9-98a5-4fe7-8675-28aadb893c7c/manager/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.090616 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-442kz_8d1cf8df-8469-41f4-a801-040210dfbb9f/kube-rbac-proxy/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.147870 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-442kz_8d1cf8df-8469-41f4-a801-040210dfbb9f/manager/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.231960 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-f4g9d_df89edd4-fc6d-4b27-8947-fbe909852d74/kube-rbac-proxy/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.304438 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-f4g9d_df89edd4-fc6d-4b27-8947-fbe909852d74/manager/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.384343 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-x2qpv_5f6f097a-e817-4f45-91fd-3c2d9d6b8d52/kube-rbac-proxy/0.log" Dec 03 21:45:37 
crc kubenswrapper[4765]: I1203 21:45:37.481968 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-x2qpv_5f6f097a-e817-4f45-91fd-3c2d9d6b8d52/manager/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.539230 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-bbb8g_bbbe5e38-0e74-426e-9ada-b2d8be5f8444/manager/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.551558 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-bbb8g_bbbe5e38-0e74-426e-9ada-b2d8be5f8444/kube-rbac-proxy/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.655942 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96_5e62f5de-bd17-4c8d-bc3f-0ce237d6e266/kube-rbac-proxy/0.log" Dec 03 21:45:37 crc kubenswrapper[4765]: I1203 21:45:37.703911 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4tnf96_5e62f5de-bd17-4c8d-bc3f-0ce237d6e266/manager/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.018626 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54ccb7f4-f26lq_59d4b087-73be-498b-b8f7-d6b067002ad5/operator/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.082472 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qn2lg_86270547-80b6-44d5-971f-c260b5b7a106/registry-server/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.201189 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-n9556_5a7474c6-a9ec-40ba-8d04-49166a15bab5/kube-rbac-proxy/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.381208 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-n9556_5a7474c6-a9ec-40ba-8d04-49166a15bab5/manager/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.426212 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-dthq2_f1d3e370-5bea-4bc9-9269-7483387b6e31/kube-rbac-proxy/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.478635 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-dthq2_f1d3e370-5bea-4bc9-9269-7483387b6e31/manager/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.647358 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-b8kv2_47ff88bb-97bc-4d0b-a24b-64559741aa30/operator/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.704535 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wmrgj_f0dd713c-31a7-4816-9044-bf59d8931367/kube-rbac-proxy/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.772694 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wmrgj_f0dd713c-31a7-4816-9044-bf59d8931367/manager/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.898707 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-w955s_64675126-66c0-4cac-ad4e-764c10e0c344/kube-rbac-proxy/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.924547 4765 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-547c884594-d98p4_19b04cd5-57c6-4494-a08b-f425c37bf13a/manager/0.log" Dec 03 21:45:38 crc kubenswrapper[4765]: I1203 21:45:38.980817 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-w955s_64675126-66c0-4cac-ad4e-764c10e0c344/manager/0.log" Dec 03 21:45:39 crc kubenswrapper[4765]: I1203 21:45:39.040614 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h7pk2_629580d2-72ea-481f-b78e-e5b6631dfda4/kube-rbac-proxy/0.log" Dec 03 21:45:39 crc kubenswrapper[4765]: I1203 21:45:39.119897 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h7pk2_629580d2-72ea-481f-b78e-e5b6631dfda4/manager/0.log" Dec 03 21:45:39 crc kubenswrapper[4765]: I1203 21:45:39.154198 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-f5s59_016c4fd7-25b8-42b0-ba5d-1008cd28b8b3/kube-rbac-proxy/0.log" Dec 03 21:45:39 crc kubenswrapper[4765]: I1203 21:45:39.202572 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-f5s59_016c4fd7-25b8-42b0-ba5d-1008cd28b8b3/manager/0.log" Dec 03 21:45:41 crc kubenswrapper[4765]: I1203 21:45:41.360118 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:45:41 crc kubenswrapper[4765]: E1203 21:45:41.360847 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:45:55 crc kubenswrapper[4765]: I1203 21:45:55.361074 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:45:55 crc kubenswrapper[4765]: E1203 21:45:55.362379 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:46:00 crc kubenswrapper[4765]: I1203 21:46:00.086252 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qv6v8_bd1c3235-df64-48e7-9c08-e7ee70c8fe49/control-plane-machine-set-operator/0.log" Dec 03 21:46:00 crc kubenswrapper[4765]: I1203 21:46:00.439618 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vj5h7_1033ee94-376d-4190-8e79-ce0d34031aed/kube-rbac-proxy/0.log" Dec 03 21:46:00 crc kubenswrapper[4765]: I1203 21:46:00.484965 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vj5h7_1033ee94-376d-4190-8e79-ce0d34031aed/machine-api-operator/0.log" Dec 03 21:46:09 crc kubenswrapper[4765]: I1203 21:46:09.359357 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:46:09 crc kubenswrapper[4765]: E1203 21:46:09.360116 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:46:15 crc kubenswrapper[4765]: I1203 21:46:15.876942 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-h5t5j_42acdc1c-8668-4544-886a-4346236c7e76/cert-manager-controller/0.log" Dec 03 21:46:16 crc kubenswrapper[4765]: I1203 21:46:16.091402 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vf7sc_aca9fa03-3bb8-4912-aa71-037533fe4b0d/cert-manager-cainjector/0.log" Dec 03 21:46:16 crc kubenswrapper[4765]: I1203 21:46:16.097564 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-twkk9_293b6288-6f0b-4e96-815a-3dffcd7a641c/cert-manager-webhook/0.log" Dec 03 21:46:23 crc kubenswrapper[4765]: I1203 21:46:23.360211 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:46:23 crc kubenswrapper[4765]: E1203 21:46:23.361175 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:46:31 crc kubenswrapper[4765]: I1203 21:46:31.603969 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-fd5mm_a66f7626-aad6-4d61-91e8-b764b50c5e0b/nmstate-console-plugin/0.log" Dec 03 21:46:31 crc kubenswrapper[4765]: I1203 21:46:31.771050 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jdsgs_d51bdd1a-e635-4ebb-863b-aaa822deb666/kube-rbac-proxy/0.log" Dec 03 21:46:31 crc kubenswrapper[4765]: I1203 21:46:31.774092 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-w2v9s_a026c029-77f9-4020-8c0f-6655cbc1dcb6/nmstate-handler/0.log" Dec 03 21:46:31 crc kubenswrapper[4765]: I1203 21:46:31.819690 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jdsgs_d51bdd1a-e635-4ebb-863b-aaa822deb666/nmstate-metrics/0.log" Dec 03 21:46:31 crc kubenswrapper[4765]: I1203 21:46:31.943580 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2dfvx_cd09d44f-8050-4a97-a4e9-73ec54239864/nmstate-operator/0.log" Dec 03 21:46:32 crc kubenswrapper[4765]: I1203 21:46:32.044034 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-jlplw_86af5345-1169-4a47-8f7c-215533b0d752/nmstate-webhook/0.log" Dec 03 21:46:37 crc kubenswrapper[4765]: I1203 21:46:37.360667 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:46:37 crc kubenswrapper[4765]: E1203 21:46:37.361495 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 
03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.003493 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nv9wn"] Dec 03 21:46:42 crc kubenswrapper[4765]: E1203 21:46:42.005077 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="extract-content" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.005111 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="extract-content" Dec 03 21:46:42 crc kubenswrapper[4765]: E1203 21:46:42.005138 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="extract-utilities" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.005151 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="extract-utilities" Dec 03 21:46:42 crc kubenswrapper[4765]: E1203 21:46:42.005180 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="registry-server" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.005195 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="registry-server" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.005619 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24528a5d-243e-4267-b0c2-c3f9229d6735" containerName="registry-server" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.008762 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.016007 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nv9wn"] Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.133126 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-utilities\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.133214 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x65v\" (UniqueName: \"kubernetes.io/projected/7967d80d-c38e-4eee-937c-c6ddffce4237-kube-api-access-8x65v\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.133345 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-catalog-content\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.235151 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-utilities\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.235557 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8x65v\" (UniqueName: \"kubernetes.io/projected/7967d80d-c38e-4eee-937c-c6ddffce4237-kube-api-access-8x65v\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.235719 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-utilities\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.236020 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-catalog-content\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.236483 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-catalog-content\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.262290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x65v\" (UniqueName: \"kubernetes.io/projected/7967d80d-c38e-4eee-937c-c6ddffce4237-kube-api-access-8x65v\") pod \"community-operators-nv9wn\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.346171 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:42 crc kubenswrapper[4765]: I1203 21:46:42.908074 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nv9wn"] Dec 03 21:46:43 crc kubenswrapper[4765]: I1203 21:46:43.696521 4765 generic.go:334] "Generic (PLEG): container finished" podID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerID="ce38e78d1e967d3873be5455de835e43e102dadf4ed090a334f50cb51478e795" exitCode=0 Dec 03 21:46:43 crc kubenswrapper[4765]: I1203 21:46:43.696629 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv9wn" event={"ID":"7967d80d-c38e-4eee-937c-c6ddffce4237","Type":"ContainerDied","Data":"ce38e78d1e967d3873be5455de835e43e102dadf4ed090a334f50cb51478e795"} Dec 03 21:46:43 crc kubenswrapper[4765]: I1203 21:46:43.696862 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv9wn" event={"ID":"7967d80d-c38e-4eee-937c-c6ddffce4237","Type":"ContainerStarted","Data":"f5fcfef2d2b97c979405c36cd670932206be468a07cc7c6c9dce88bf71ddb191"} Dec 03 21:46:45 crc kubenswrapper[4765]: I1203 21:46:45.716513 4765 generic.go:334] "Generic (PLEG): container finished" podID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerID="32b0fd791bc93f18a1091072a560155ea162314c8beab2a2db8462f7aa2ee889" exitCode=0 Dec 03 21:46:45 crc kubenswrapper[4765]: I1203 21:46:45.716608 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv9wn" event={"ID":"7967d80d-c38e-4eee-937c-c6ddffce4237","Type":"ContainerDied","Data":"32b0fd791bc93f18a1091072a560155ea162314c8beab2a2db8462f7aa2ee889"} Dec 03 21:46:46 crc kubenswrapper[4765]: I1203 21:46:46.729892 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv9wn" 
event={"ID":"7967d80d-c38e-4eee-937c-c6ddffce4237","Type":"ContainerStarted","Data":"5fd5999ba96fd4c9687edb9cbc67854fa0e6627f85aae82eb89c1fd37a547bf5"} Dec 03 21:46:46 crc kubenswrapper[4765]: I1203 21:46:46.758187 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nv9wn" podStartSLOduration=3.36417708 podStartE2EDuration="5.75816381s" podCreationTimestamp="2025-12-03 21:46:41 +0000 UTC" firstStartedPulling="2025-12-03 21:46:43.699000059 +0000 UTC m=+4101.629545220" lastFinishedPulling="2025-12-03 21:46:46.092986799 +0000 UTC m=+4104.023531950" observedRunningTime="2025-12-03 21:46:46.752657671 +0000 UTC m=+4104.683202842" watchObservedRunningTime="2025-12-03 21:46:46.75816381 +0000 UTC m=+4104.688708961" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.275958 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-n78jt_b248c7e1-c2a2-4c22-ab0f-fb221be60e58/controller/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.283376 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-n78jt_b248c7e1-c2a2-4c22-ab0f-fb221be60e58/kube-rbac-proxy/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.437780 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.623258 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.670514 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.701627 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.707057 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.902642 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.935050 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.956684 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:46:47 crc kubenswrapper[4765]: I1203 21:46:47.973669 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.114116 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-frr-files/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.124190 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/controller/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.138718 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-reloader/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.171070 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/cp-metrics/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.299629 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/frr-metrics/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.368581 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/kube-rbac-proxy/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.397472 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/kube-rbac-proxy-frr/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.562950 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/reloader/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.617005 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-l7nmz_e875bdde-0dbd-40b6-a84c-1bdd7e4baabf/frr-k8s-webhook-server/0.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.848556 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-869886bfd4-t75fk_56245235-eef6-472d-b481-1b9d7f80b89c/manager/1.log" Dec 03 21:46:48 crc kubenswrapper[4765]: I1203 21:46:48.905722 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-869886bfd4-t75fk_56245235-eef6-472d-b481-1b9d7f80b89c/manager/0.log" Dec 03 21:46:49 crc kubenswrapper[4765]: I1203 21:46:49.054932 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7494cc9b6f-4zr8g_79c4ff63-6cbc-4fd7-9c7f-70bd9004d94a/webhook-server/0.log" Dec 03 21:46:49 crc kubenswrapper[4765]: I1203 
21:46:49.319793 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qvsp4_fed0fc97-3d14-4716-ad43-4c3bfd606850/kube-rbac-proxy/0.log" Dec 03 21:46:49 crc kubenswrapper[4765]: I1203 21:46:49.359424 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:46:49 crc kubenswrapper[4765]: E1203 21:46:49.359665 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:46:49 crc kubenswrapper[4765]: I1203 21:46:49.588364 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rtzp2_d3648e48-1afd-42ec-9aec-4d91958639b9/frr/0.log" Dec 03 21:46:49 crc kubenswrapper[4765]: I1203 21:46:49.660737 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qvsp4_fed0fc97-3d14-4716-ad43-4c3bfd606850/speaker/0.log" Dec 03 21:46:52 crc kubenswrapper[4765]: I1203 21:46:52.346592 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:52 crc kubenswrapper[4765]: I1203 21:46:52.347076 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:52 crc kubenswrapper[4765]: I1203 21:46:52.414443 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:52 crc kubenswrapper[4765]: I1203 21:46:52.884164 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:52 crc kubenswrapper[4765]: I1203 21:46:52.948743 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nv9wn"] Dec 03 21:46:54 crc kubenswrapper[4765]: I1203 21:46:54.827245 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nv9wn" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="registry-server" containerID="cri-o://5fd5999ba96fd4c9687edb9cbc67854fa0e6627f85aae82eb89c1fd37a547bf5" gracePeriod=2 Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.841128 4765 generic.go:334] "Generic (PLEG): container finished" podID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerID="5fd5999ba96fd4c9687edb9cbc67854fa0e6627f85aae82eb89c1fd37a547bf5" exitCode=0 Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.841202 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv9wn" event={"ID":"7967d80d-c38e-4eee-937c-c6ddffce4237","Type":"ContainerDied","Data":"5fd5999ba96fd4c9687edb9cbc67854fa0e6627f85aae82eb89c1fd37a547bf5"} Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.841477 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nv9wn" event={"ID":"7967d80d-c38e-4eee-937c-c6ddffce4237","Type":"ContainerDied","Data":"f5fcfef2d2b97c979405c36cd670932206be468a07cc7c6c9dce88bf71ddb191"} Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.841492 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5fcfef2d2b97c979405c36cd670932206be468a07cc7c6c9dce88bf71ddb191" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.853000 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.868801 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-catalog-content\") pod \"7967d80d-c38e-4eee-937c-c6ddffce4237\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.868928 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-utilities\") pod \"7967d80d-c38e-4eee-937c-c6ddffce4237\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.868999 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x65v\" (UniqueName: \"kubernetes.io/projected/7967d80d-c38e-4eee-937c-c6ddffce4237-kube-api-access-8x65v\") pod \"7967d80d-c38e-4eee-937c-c6ddffce4237\" (UID: \"7967d80d-c38e-4eee-937c-c6ddffce4237\") " Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.869837 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-utilities" (OuterVolumeSpecName: "utilities") pod "7967d80d-c38e-4eee-937c-c6ddffce4237" (UID: "7967d80d-c38e-4eee-937c-c6ddffce4237"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.874778 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7967d80d-c38e-4eee-937c-c6ddffce4237-kube-api-access-8x65v" (OuterVolumeSpecName: "kube-api-access-8x65v") pod "7967d80d-c38e-4eee-937c-c6ddffce4237" (UID: "7967d80d-c38e-4eee-937c-c6ddffce4237"). InnerVolumeSpecName "kube-api-access-8x65v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.938603 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7967d80d-c38e-4eee-937c-c6ddffce4237" (UID: "7967d80d-c38e-4eee-937c-c6ddffce4237"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.972462 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.972550 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7967d80d-c38e-4eee-937c-c6ddffce4237-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:46:55 crc kubenswrapper[4765]: I1203 21:46:55.972567 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x65v\" (UniqueName: \"kubernetes.io/projected/7967d80d-c38e-4eee-937c-c6ddffce4237-kube-api-access-8x65v\") on node \"crc\" DevicePath \"\"" Dec 03 21:46:56 crc kubenswrapper[4765]: I1203 21:46:56.852342 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nv9wn" Dec 03 21:46:56 crc kubenswrapper[4765]: I1203 21:46:56.886387 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nv9wn"] Dec 03 21:46:56 crc kubenswrapper[4765]: I1203 21:46:56.895010 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nv9wn"] Dec 03 21:46:58 crc kubenswrapper[4765]: I1203 21:46:58.378275 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" path="/var/lib/kubelet/pods/7967d80d-c38e-4eee-937c-c6ddffce4237/volumes" Dec 03 21:47:00 crc kubenswrapper[4765]: I1203 21:47:00.359786 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:47:00 crc kubenswrapper[4765]: E1203 21:47:00.360761 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.322467 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/util/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.490870 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/pull/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.517851 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/util/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.518101 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/pull/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.671141 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/extract/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.674376 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/util/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.703376 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fhd7qj_3720cd85-f431-48f8-8914-2c4196029b6f/pull/0.log" Dec 03 21:47:05 crc kubenswrapper[4765]: I1203 21:47:05.829222 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/util/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.008101 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/pull/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.017950 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/pull/0.log" Dec 03 
21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.075607 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/util/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.211589 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/util/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.214538 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/extract/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.235413 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83lzjdj_56131f70-b87d-4e36-a680-eab8d3bbee72/pull/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.451946 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-utilities/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.604224 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-utilities/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.606012 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-content/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.630469 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-content/0.log" Dec 03 
21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.826357 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-content/0.log" Dec 03 21:47:06 crc kubenswrapper[4765]: I1203 21:47:06.831442 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/extract-utilities/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.050574 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mvpff_d900a1a5-3df1-4443-a451-301f156d5c07/registry-server/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.077354 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-utilities/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.238369 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-content/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.271454 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-content/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.277072 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-utilities/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.448200 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-utilities/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.460138 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/extract-content/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.790904 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztctx_72a5b180-7b23-4bfd-a10b-c35f73c732aa/marketplace-operator/0.log" Dec 03 21:47:07 crc kubenswrapper[4765]: I1203 21:47:07.848877 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ztctx_72a5b180-7b23-4bfd-a10b-c35f73c732aa/marketplace-operator/1.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.007197 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-utilities/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.037730 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-llbzl_093c506d-ed96-47c2-8e8d-c499d82381e5/registry-server/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.169819 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-content/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.177245 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-content/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.226655 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-utilities/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.384247 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-utilities/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.429086 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-utilities/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.497268 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/extract-content/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.560423 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k92tm_aa9f4500-9c6f-4415-bea7-eebfda74d3ee/registry-server/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.680114 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-content/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.680600 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-utilities/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.706554 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-content/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.866891 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-content/0.log" Dec 03 21:47:08 crc kubenswrapper[4765]: I1203 21:47:08.867016 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/extract-utilities/0.log" Dec 
03 21:47:09 crc kubenswrapper[4765]: I1203 21:47:09.320411 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftrfh_80d51b27-d825-4e91-81bd-8e3297c4f550/registry-server/0.log" Dec 03 21:47:12 crc kubenswrapper[4765]: I1203 21:47:12.369245 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:47:12 crc kubenswrapper[4765]: E1203 21:47:12.370986 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:47:27 crc kubenswrapper[4765]: I1203 21:47:27.366757 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:47:27 crc kubenswrapper[4765]: E1203 21:47:27.367553 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:47:40 crc kubenswrapper[4765]: I1203 21:47:40.360836 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:47:40 crc kubenswrapper[4765]: E1203 21:47:40.361784 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:47:51 crc kubenswrapper[4765]: I1203 21:47:51.360410 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:47:51 crc kubenswrapper[4765]: E1203 21:47:51.361450 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.719619 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v62nc"] Dec 03 21:47:58 crc kubenswrapper[4765]: E1203 21:47:58.720471 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="registry-server" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.720487 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="registry-server" Dec 03 21:47:58 crc kubenswrapper[4765]: E1203 21:47:58.720510 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="extract-utilities" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.720518 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="extract-utilities" Dec 03 21:47:58 crc kubenswrapper[4765]: E1203 21:47:58.720543 4765 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="extract-content" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.720553 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="extract-content" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.720750 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7967d80d-c38e-4eee-937c-c6ddffce4237" containerName="registry-server" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.722220 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v62nc"] Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.722515 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.831066 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-catalog-content\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.831365 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghwz\" (UniqueName: \"kubernetes.io/projected/fbb79c95-f025-422d-878f-7c848bd8e554-kube-api-access-6ghwz\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.831395 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-utilities\") pod \"redhat-operators-v62nc\" (UID: 
\"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.933445 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-catalog-content\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.933522 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ghwz\" (UniqueName: \"kubernetes.io/projected/fbb79c95-f025-422d-878f-7c848bd8e554-kube-api-access-6ghwz\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.933542 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-utilities\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.934035 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-utilities\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.934240 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-catalog-content\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " 
pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:58 crc kubenswrapper[4765]: I1203 21:47:58.960094 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ghwz\" (UniqueName: \"kubernetes.io/projected/fbb79c95-f025-422d-878f-7c848bd8e554-kube-api-access-6ghwz\") pod \"redhat-operators-v62nc\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:59 crc kubenswrapper[4765]: I1203 21:47:59.071899 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:47:59 crc kubenswrapper[4765]: I1203 21:47:59.566431 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v62nc"] Dec 03 21:48:00 crc kubenswrapper[4765]: I1203 21:48:00.548680 4765 generic.go:334] "Generic (PLEG): container finished" podID="fbb79c95-f025-422d-878f-7c848bd8e554" containerID="bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5" exitCode=0 Dec 03 21:48:00 crc kubenswrapper[4765]: I1203 21:48:00.548783 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerDied","Data":"bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5"} Dec 03 21:48:00 crc kubenswrapper[4765]: I1203 21:48:00.549106 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerStarted","Data":"4622641ffdee199c7c127c195573214244c7d5c4d1cc7d4e3ff54fa2870db7ef"} Dec 03 21:48:00 crc kubenswrapper[4765]: I1203 21:48:00.553340 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 21:48:01 crc kubenswrapper[4765]: I1203 21:48:01.566763 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerStarted","Data":"e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13"} Dec 03 21:48:02 crc kubenswrapper[4765]: I1203 21:48:02.580168 4765 generic.go:334] "Generic (PLEG): container finished" podID="fbb79c95-f025-422d-878f-7c848bd8e554" containerID="e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13" exitCode=0 Dec 03 21:48:02 crc kubenswrapper[4765]: I1203 21:48:02.580223 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerDied","Data":"e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13"} Dec 03 21:48:03 crc kubenswrapper[4765]: I1203 21:48:03.362158 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:48:03 crc kubenswrapper[4765]: E1203 21:48:03.363114 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-swqqp_openshift-machine-config-operator(f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" Dec 03 21:48:03 crc kubenswrapper[4765]: I1203 21:48:03.592362 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerStarted","Data":"f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06"} Dec 03 21:48:03 crc kubenswrapper[4765]: I1203 21:48:03.627646 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v62nc" podStartSLOduration=3.188391609 
podStartE2EDuration="5.627617506s" podCreationTimestamp="2025-12-03 21:47:58 +0000 UTC" firstStartedPulling="2025-12-03 21:48:00.553019826 +0000 UTC m=+4178.483564997" lastFinishedPulling="2025-12-03 21:48:02.992245743 +0000 UTC m=+4180.922790894" observedRunningTime="2025-12-03 21:48:03.613487343 +0000 UTC m=+4181.544032504" watchObservedRunningTime="2025-12-03 21:48:03.627617506 +0000 UTC m=+4181.558162667" Dec 03 21:48:09 crc kubenswrapper[4765]: I1203 21:48:09.072905 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:48:09 crc kubenswrapper[4765]: I1203 21:48:09.074707 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:48:10 crc kubenswrapper[4765]: I1203 21:48:10.145252 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v62nc" podUID="fbb79c95-f025-422d-878f-7c848bd8e554" containerName="registry-server" probeResult="failure" output=< Dec 03 21:48:10 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Dec 03 21:48:10 crc kubenswrapper[4765]: > Dec 03 21:48:18 crc kubenswrapper[4765]: I1203 21:48:18.360557 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca" Dec 03 21:48:18 crc kubenswrapper[4765]: I1203 21:48:18.777230 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"6f91eadfaac6aef1ff17214c89012b076ecebdadbc62b380dc21db16e1b7dfe0"} Dec 03 21:48:19 crc kubenswrapper[4765]: I1203 21:48:19.147879 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:48:19 crc kubenswrapper[4765]: I1203 21:48:19.236131 4765 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:48:19 crc kubenswrapper[4765]: I1203 21:48:19.394839 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v62nc"] Dec 03 21:48:20 crc kubenswrapper[4765]: I1203 21:48:20.794281 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v62nc" podUID="fbb79c95-f025-422d-878f-7c848bd8e554" containerName="registry-server" containerID="cri-o://f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06" gracePeriod=2 Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.279273 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.361006 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ghwz\" (UniqueName: \"kubernetes.io/projected/fbb79c95-f025-422d-878f-7c848bd8e554-kube-api-access-6ghwz\") pod \"fbb79c95-f025-422d-878f-7c848bd8e554\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.361138 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-utilities\") pod \"fbb79c95-f025-422d-878f-7c848bd8e554\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.361290 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-catalog-content\") pod \"fbb79c95-f025-422d-878f-7c848bd8e554\" (UID: \"fbb79c95-f025-422d-878f-7c848bd8e554\") " Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.362176 4765 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-utilities" (OuterVolumeSpecName: "utilities") pod "fbb79c95-f025-422d-878f-7c848bd8e554" (UID: "fbb79c95-f025-422d-878f-7c848bd8e554"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.392417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb79c95-f025-422d-878f-7c848bd8e554-kube-api-access-6ghwz" (OuterVolumeSpecName: "kube-api-access-6ghwz") pod "fbb79c95-f025-422d-878f-7c848bd8e554" (UID: "fbb79c95-f025-422d-878f-7c848bd8e554"). InnerVolumeSpecName "kube-api-access-6ghwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.465340 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ghwz\" (UniqueName: \"kubernetes.io/projected/fbb79c95-f025-422d-878f-7c848bd8e554-kube-api-access-6ghwz\") on node \"crc\" DevicePath \"\"" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.465731 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.489775 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbb79c95-f025-422d-878f-7c848bd8e554" (UID: "fbb79c95-f025-422d-878f-7c848bd8e554"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.567802 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb79c95-f025-422d-878f-7c848bd8e554-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.809570 4765 generic.go:334] "Generic (PLEG): container finished" podID="fbb79c95-f025-422d-878f-7c848bd8e554" containerID="f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06" exitCode=0 Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.809635 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerDied","Data":"f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06"} Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.809653 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v62nc" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.809695 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v62nc" event={"ID":"fbb79c95-f025-422d-878f-7c848bd8e554","Type":"ContainerDied","Data":"4622641ffdee199c7c127c195573214244c7d5c4d1cc7d4e3ff54fa2870db7ef"} Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.809725 4765 scope.go:117] "RemoveContainer" containerID="f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.834877 4765 scope.go:117] "RemoveContainer" containerID="e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.878350 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v62nc"] Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.881342 4765 scope.go:117] "RemoveContainer" containerID="bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.888180 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v62nc"] Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.942815 4765 scope.go:117] "RemoveContainer" containerID="f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06" Dec 03 21:48:21 crc kubenswrapper[4765]: E1203 21:48:21.943317 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06\": container with ID starting with f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06 not found: ID does not exist" containerID="f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.943351 4765 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06"} err="failed to get container status \"f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06\": rpc error: code = NotFound desc = could not find container \"f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06\": container with ID starting with f37f06e01174076f8571d8051a038f9b9bc5ed6d2273afe9ec6ee01a9dddcf06 not found: ID does not exist" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.943370 4765 scope.go:117] "RemoveContainer" containerID="e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13" Dec 03 21:48:21 crc kubenswrapper[4765]: E1203 21:48:21.943933 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13\": container with ID starting with e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13 not found: ID does not exist" containerID="e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.943958 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13"} err="failed to get container status \"e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13\": rpc error: code = NotFound desc = could not find container \"e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13\": container with ID starting with e97e2f940ebbb5649e6260bbc351eb3d1d6af51a18642dfe3ec91f9760448f13 not found: ID does not exist" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.943974 4765 scope.go:117] "RemoveContainer" containerID="bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5" Dec 03 21:48:21 crc kubenswrapper[4765]: E1203 
21:48:21.944200 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5\": container with ID starting with bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5 not found: ID does not exist" containerID="bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5" Dec 03 21:48:21 crc kubenswrapper[4765]: I1203 21:48:21.944216 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5"} err="failed to get container status \"bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5\": rpc error: code = NotFound desc = could not find container \"bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5\": container with ID starting with bfe230e61a1c8fa48ad377ad74c2f4f909cfbc7386e810f87b8d3b9388443bc5 not found: ID does not exist" Dec 03 21:48:22 crc kubenswrapper[4765]: I1203 21:48:22.387440 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb79c95-f025-422d-878f-7c848bd8e554" path="/var/lib/kubelet/pods/fbb79c95-f025-422d-878f-7c848bd8e554/volumes" Dec 03 21:48:31 crc kubenswrapper[4765]: I1203 21:48:31.940585 4765 scope.go:117] "RemoveContainer" containerID="397d8e9eb1f3651983463d37b0f7c1cf22fd00b2f51ef42d5203c2d445271fca" Dec 03 21:48:53 crc kubenswrapper[4765]: I1203 21:48:53.243480 4765 generic.go:334] "Generic (PLEG): container finished" podID="948659df-cbfd-48fb-8ee2-d68fcd7fb58a" containerID="00834273d96d24236cb54e942907c8f73f5c749a1652d0967412f84e5db1537e" exitCode=0 Dec 03 21:48:53 crc kubenswrapper[4765]: I1203 21:48:53.243975 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" 
event={"ID":"948659df-cbfd-48fb-8ee2-d68fcd7fb58a","Type":"ContainerDied","Data":"00834273d96d24236cb54e942907c8f73f5c749a1652d0967412f84e5db1537e"} Dec 03 21:48:53 crc kubenswrapper[4765]: I1203 21:48:53.244938 4765 scope.go:117] "RemoveContainer" containerID="00834273d96d24236cb54e942907c8f73f5c749a1652d0967412f84e5db1537e" Dec 03 21:48:53 crc kubenswrapper[4765]: I1203 21:48:53.825992 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfd6x_must-gather-4hw8w_948659df-cbfd-48fb-8ee2-d68fcd7fb58a/gather/0.log" Dec 03 21:49:03 crc kubenswrapper[4765]: I1203 21:49:03.881734 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfd6x/must-gather-4hw8w"] Dec 03 21:49:03 crc kubenswrapper[4765]: I1203 21:49:03.882654 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pfd6x/must-gather-4hw8w" podUID="948659df-cbfd-48fb-8ee2-d68fcd7fb58a" containerName="copy" containerID="cri-o://ba71757afbdd3207104f3eed8717b47daf75194d943f4fd218139032f411261c" gracePeriod=2 Dec 03 21:49:03 crc kubenswrapper[4765]: I1203 21:49:03.892391 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfd6x/must-gather-4hw8w"] Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.410190 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfd6x_must-gather-4hw8w_948659df-cbfd-48fb-8ee2-d68fcd7fb58a/copy/0.log" Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.410850 4765 generic.go:334] "Generic (PLEG): container finished" podID="948659df-cbfd-48fb-8ee2-d68fcd7fb58a" containerID="ba71757afbdd3207104f3eed8717b47daf75194d943f4fd218139032f411261c" exitCode=143 Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.410898 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2954ae2815358d6e82df5cd8d253cbf805541117a62c39a1935c947c351c10d1" Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 
21:49:04.445483 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfd6x_must-gather-4hw8w_948659df-cbfd-48fb-8ee2-d68fcd7fb58a/copy/0.log"
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.445998 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/must-gather-4hw8w"
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.573382 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqdn\" (UniqueName: \"kubernetes.io/projected/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-kube-api-access-vzqdn\") pod \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") "
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.573541 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-must-gather-output\") pod \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\" (UID: \"948659df-cbfd-48fb-8ee2-d68fcd7fb58a\") "
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.579284 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-kube-api-access-vzqdn" (OuterVolumeSpecName: "kube-api-access-vzqdn") pod "948659df-cbfd-48fb-8ee2-d68fcd7fb58a" (UID: "948659df-cbfd-48fb-8ee2-d68fcd7fb58a"). InnerVolumeSpecName "kube-api-access-vzqdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.675844 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqdn\" (UniqueName: \"kubernetes.io/projected/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-kube-api-access-vzqdn\") on node \"crc\" DevicePath \"\""
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.727257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "948659df-cbfd-48fb-8ee2-d68fcd7fb58a" (UID: "948659df-cbfd-48fb-8ee2-d68fcd7fb58a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 21:49:04 crc kubenswrapper[4765]: I1203 21:49:04.777523 4765 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/948659df-cbfd-48fb-8ee2-d68fcd7fb58a-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 03 21:49:05 crc kubenswrapper[4765]: I1203 21:49:05.419595 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfd6x/must-gather-4hw8w"
Dec 03 21:49:06 crc kubenswrapper[4765]: I1203 21:49:06.374333 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948659df-cbfd-48fb-8ee2-d68fcd7fb58a" path="/var/lib/kubelet/pods/948659df-cbfd-48fb-8ee2-d68fcd7fb58a/volumes"
Dec 03 21:49:32 crc kubenswrapper[4765]: I1203 21:49:32.001465 4765 scope.go:117] "RemoveContainer" containerID="cb12c803a1ca5eca5afaaf706beadd251f79d5ca477d285d71ebcbc0035e8047"
Dec 03 21:49:32 crc kubenswrapper[4765]: I1203 21:49:32.033268 4765 scope.go:117] "RemoveContainer" containerID="ba71757afbdd3207104f3eed8717b47daf75194d943f4fd218139032f411261c"
Dec 03 21:49:32 crc kubenswrapper[4765]: I1203 21:49:32.058305 4765 scope.go:117] "RemoveContainer" containerID="44643b6e864e55d1c57b00de2fe305a7d705211dc0f3dc83853979038984061d"
Dec 03 21:49:32 crc kubenswrapper[4765]: I1203 21:49:32.092547 4765 scope.go:117] "RemoveContainer" containerID="446fdaa6190632a4478e981155563e2b54151103ceb58084cec352ae3779f1f0"
Dec 03 21:49:32 crc kubenswrapper[4765]: I1203 21:49:32.131782 4765 scope.go:117] "RemoveContainer" containerID="00834273d96d24236cb54e942907c8f73f5c749a1652d0967412f84e5db1537e"
Dec 03 21:50:24 crc kubenswrapper[4765]: I1203 21:50:24.798493 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 21:50:24 crc kubenswrapper[4765]: I1203 21:50:24.799382 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 21:50:32 crc kubenswrapper[4765]: I1203 21:50:32.356624 4765 scope.go:117] "RemoveContainer" containerID="c8f33190d190c246802102b388db12afa79a2dcd6769b3f82106beb0e15e769e"
Dec 03 21:50:54 crc kubenswrapper[4765]: I1203 21:50:54.798073 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 21:50:54 crc kubenswrapper[4765]: I1203 21:50:54.798674 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 21:51:24 crc kubenswrapper[4765]: I1203 21:51:24.798873 4765 patch_prober.go:28] interesting pod/machine-config-daemon-swqqp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 21:51:24 crc kubenswrapper[4765]: I1203 21:51:24.799644 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 21:51:24 crc kubenswrapper[4765]: I1203 21:51:24.799714 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-swqqp"
Dec 03 21:51:24 crc kubenswrapper[4765]: I1203 21:51:24.800898 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f91eadfaac6aef1ff17214c89012b076ecebdadbc62b380dc21db16e1b7dfe0"} pod="openshift-machine-config-operator/machine-config-daemon-swqqp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 21:51:24 crc kubenswrapper[4765]: I1203 21:51:24.800993 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" podUID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerName="machine-config-daemon" containerID="cri-o://6f91eadfaac6aef1ff17214c89012b076ecebdadbc62b380dc21db16e1b7dfe0" gracePeriod=600
Dec 03 21:51:29 crc kubenswrapper[4765]: I1203 21:51:29.646734 4765 generic.go:334] "Generic (PLEG): container finished" podID="f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5" containerID="6f91eadfaac6aef1ff17214c89012b076ecebdadbc62b380dc21db16e1b7dfe0" exitCode=0
Dec 03 21:51:29 crc kubenswrapper[4765]: I1203 21:51:29.646896 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerDied","Data":"6f91eadfaac6aef1ff17214c89012b076ecebdadbc62b380dc21db16e1b7dfe0"}
Dec 03 21:51:29 crc kubenswrapper[4765]: I1203 21:51:29.647342 4765 scope.go:117] "RemoveContainer" containerID="5f0e031fdf599ff168247cb5c06ce8af45c030e901fbf0a0c9657c32f773aaca"
Dec 03 21:51:30 crc kubenswrapper[4765]: I1203 21:51:30.659862 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-swqqp" event={"ID":"f9e50fba-2d3f-420f-b6fb-cc6a7c8d9eb5","Type":"ContainerStarted","Data":"e5ae65d20986615288629b1c4947e940c7be62766624a865cdd84e0175bd0dee"}
Dec 03 21:51:32 crc kubenswrapper[4765]: I1203 21:51:32.426703 4765 scope.go:117] "RemoveContainer" containerID="3cabf5bd6c6b3e37bf4777a8f6959785c44e6e5c133850ec18ff8aff6450a335"
Dec 03 21:51:32 crc kubenswrapper[4765]: I1203 21:51:32.473042 4765 scope.go:117] "RemoveContainer" containerID="0c4c016d395d1e3e495d2f2a4e035e6a593bef76f883f1aeb70935a696e05a0b"
Dec 03 21:51:32 crc kubenswrapper[4765]: I1203 21:51:32.536366 4765 scope.go:117] "RemoveContainer" containerID="ce3e56cbba7fd4d906c10fb625895acb39c0b7ff08cbd6d2c93540a7518cc8f5"